Nov 14, 2011
Do you still believe that discrete GPU's have a future?
What do you base that ludicrous belief on? Drugs?
Because everything says that IGP's are getting to be "good enough" for a big enough swath of the market (and that very much includes most gamers - look at the game consoles, for chrissake! You are aware that modern game consoles are IGP's, right?) that the discrete GPU model isn't financially viable in the long run.
So your argument is exactly the wrong way around. It's not that the IGP's can't have an adequate market size; it's the discrete GPU's that have market size problems.
And the IGP's are very much moving in the direction of the GPU being more of a general accelerator (AMD calls the combination "APUs", obviously). And one of the big advantages of integration (apart from just the traditional advantages of fewer chips, etc.) is that it makes it much easier to share cache hierarchies and be much more tightly coupled at a software level too. Sharing the virtual address space between GPU and CPU threads means less need for copying, and cache coherency makes a lot of things easier and more likely to work well.
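[Editor's note: as a concrete illustration of the copying that a shared virtual address space avoids, here is the classic discrete-GPU workflow sketched with the CUDA runtime. The API choice, the `scale` kernel, and the sizes are assumptions made purely for illustration, not anything from the post: the point is that the data lives twice, and every trip between CPU and GPU is an explicit copy.]

    // Discrete-GPU style: separate device memory, explicit copies in and out.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    int main() {
        const int n = 1 << 20;
        float *host = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n; ++i) host[i] = (float)i;

        float *dev;
        cudaMalloc(&dev, n * sizeof(float));                               // a second copy of the data, in device memory
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);  // explicit copy in
        scale<<<(n + 255) / 256, 256>>>(dev, n);
        cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);  // explicit copy out
        cudaFree(dev);

        printf("%f\n", host[42]);   // 84.0
        free(host);
        return 0;
    }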
We've seen this before, outside of graphics. Sure, you can use MPI on a cluster and get great performance for some very specific loads. But ask yourself why everybody ends up wanting SMP anyway. The cluster people were simply wrong when they tried to convince everyone that hardware cache coherency is too expensive. It's just too complicated to write efficient programs in a cluster environment.
The exact same is true in GPU's too. People have put tons of effort into working around the cluster problems, and lots of the graphics libraries and interfaces (think OpenGL) are basically the equivalent of MPI. But look at the direction the industry is actually going: thanks to integration, it starts making sense to look at tighter couplings not just on a hardware level, but on a software level. Which is why you see all the vendors starting to bring out their "close to metal" models - when you can do memory allocations that "just work" for both the CPU and the GPU, and can pass pointers around, the whole model changes.
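[Editor's note: to make that concrete, here is the same computation in the pointer-passing style he's describing, again sketched with CUDA managed memory purely as an illustration - the targets he actually has in mind are APU-style hardware and the vendors' "close to metal" models, not any one runtime. One allocation, one pointer, no staging copies.]

    // Shared-address-space style: one allocation that both CPU and GPU
    // dereference through the same pointer; no explicit copies.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    int main() {
        const int n = 1 << 20;
        float *data;
        cudaMallocManaged(&data, n * sizeof(float));    // visible to CPU and GPU alike

        for (int i = 0; i < n; ++i) data[i] = (float)i; // CPU writes through the pointer
        scale<<<(n + 255) / 256, 256>>>(data, n);       // GPU gets the very same pointer
        cudaDeviceSynchronize();                        // wait for the kernel before reading
        printf("%f\n", data[42]);                       // CPU reads the result in place: 84.0

        cudaFree(data);
        return 0;
    }

[On an integrated part the sharing is physical - same DRAM, and potentially a shared cache hierarchy - whereas on a discrete card this kind of managed allocation still migrates data over the bus behind the scenes, which is exactly the point being made about why integration helps.]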
And it changes for the better. It's more efficient.
Discrete GPU's are a historical artifact. They're going away. They are inferior technology, and there isn't a big enough market to support them.
Linus
http://www.realworldtech.com/forum/?threadid=141700&curpostid=141714