The peak GFLOPS of the cores on the desktop i7-4770K at 4 GHz is 4 GHz * 8 (AVX) * 4 (FMA) * 4 cores = 512 GFLOPS. But the latest Intel IGP (Iris Pro 5100/5200) has a peak of over 800 GFLOPS. Some algorithms will therefore run even faster on the IGP, and combining the cores with the IGP would be better still. Additionally, the IGP keeps eating up more silicon: the Iris Pro 5100 takes up over 30% of the silicon now. It seems clear which direction Intel desktop processors are headed.
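To make the arithmetic explicit, here is a minimal sketch of where the 512 GFLOPS figure comes from; it assumes single-precision AVX (8 lanes) and reads the factor of 4 as two FMA units per core with two flops per FMA, while the 800 GFLOPS IGP number is simply the figure quoted above.

    #include <stdio.h>

    int main(void) {
        /* Theoretical single-precision peak for an i7-4770K running at 4 GHz.      */
        /* Assumptions: 8-wide AVX, 2 FMA units per core, 2 flops per FMA, 4 cores. */
        double ghz       = 4.0;   /* clock in GHz                     */
        double avx_lanes = 8.0;   /* 32-bit floats per AVX register   */
        double fma_flops = 4.0;   /* 2 FMA ports * 2 flops per FMA    */
        double cores     = 4.0;

        double cpu_peak = ghz * avx_lanes * fma_flops * cores;  /* = 512 GFLOPS */
        double igp_peak = 800.0;  /* "over 800 GFLOPS" quoted for Iris Pro 5100/5200 */

        printf("CPU cores peak:       %.0f GFLOPS\n", cpu_peak);
        printf("IGP peak (quoted):   >%.0f GFLOPS\n", igp_peak);
        printf("Combined upper bound: %.0f+ GFLOPS\n", cpu_peak + igp_peak);
        return 0;
    }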
As far as I have seen, however, the Intel IGP is mostly ignored by programmers, with the exception of OpenCL/OpenGL. I’m curious to know how one can program the Intel HD Graphics hardware for compute (e.g. ...). There is no Intel support for HD Graphics and OpenCL on Linux. I found Beignet, which is an open-source attempt to add support on Linux, at least for Ivy Bridge HD Graphics. Probably the people developing Beignet know how to program the HD Graphics hardware without OpenCL, then.
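For the OpenCL route, the sketch below shows the host-side device discovery step; it assumes an OpenCL implementation that exposes the IGP is installed (for example Beignet on Linux), and it only lists whatever GPU devices that implementation reports.

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);

        for (cl_uint p = 0; p < num_platforms; p++) {
            char pname[256] = {0};
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);

            cl_device_id devices[8];
            cl_uint num_devices = 0;
            /* Ask only for GPU devices; the HD Graphics IGP should show up here
               if the installed OpenCL driver (e.g. Beignet) supports it. */
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices,
                               &num_devices) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < num_devices; d++) {
                char dname[256] = {0};
                cl_uint cus = 0;
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                                sizeof(cus), &cus, NULL);
                printf("platform \"%s\": GPU \"%s\", %u compute units\n",
                       pname, dname, cus);
            }
        }
        return 0;
    }

Built with something like gcc devices.c -lOpenCL (the file name is just an example), this should list the HD Graphics device if the OpenCL stack is working; everything beyond that, such as building kernels and enqueueing work, goes through the same standard OpenCL API.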
Part of the reason the IGP has been treated as an afterthought is historical. As Intel got into the chipset business it quickly found itself faced with an interesting problem. As the number of supported IO interfaces increased (back then we were talking about things like AGP and FSB), the size of the North Bridge die had to increase in order to accommodate all of the external-facing IO. Eventually Intel ended up in a situation where IO dictated a minimum die area for the chipset, but the actual controllers driving that IO didn’t need all of that die area. Intel effectively had some free space on its North Bridge die to do whatever it wanted with. In the late 90s Micron saw this problem and contemplated throwing some L3 cache onto its North Bridges. Intel’s solution was to give graphics away for free.

The budget for Intel graphics was always whatever free space remained once all the other necessary controllers in the North Bridge were accounted for. As a result, Intel’s integrated graphics was never particularly good. Intel didn’t care about graphics; it just had some free space on a necessary piece of silicon and decided to do something with it. High-performance GPUs need lots of transistors, something Intel would never give its graphics architects - they only got the bare minimum. It also didn’t make sense to focus on things like driver optimizations and image quality, and investing in people and infrastructure to support something you’re giving away for free never made a lot of sense.
Intel hired some very passionate graphics engineers, who always petitioned Intel management to give them more die area to work with, but the answer always came back no. Intel was a pure-blooded CPU company, and the GPU industry wasn’t interesting enough at the time. Intel’s GPU leadership needed another approach.

A few years ago they got that break. Once again, it had to do with IO demands on chipset die area. Intel’s chipsets were always built on an n-1 or n-2 process: if Intel was building a 45nm CPU, the chipset would be built on 65nm or 90nm. This waterfall effect allowed Intel to get more mileage out of its older fabs, which made the accountants at Intel quite happy, as those $2 - $3B buildings are painfully useless once obsolete. As the PC industry grew, so did shipments of Intel chipsets, and each Intel CPU sold needed at least one other Intel chip built on a previous-generation node. Interface widths, as well as the number of IOs required on chipsets, continued to increase, driving chipset die areas up once again.