I'm curious how everyone feels about the future of these newly emerging on-chip and on-die graphics solutions (such as Intel HD Graphics and AMD Fusion). Do you think they will become the de facto standard from now on, or eventually blow over?
Obviously, for some time to come any serious enthusiast will still need a standalone graphics card, and if they bundle graphics chips on-die with CPUs you could very well be stuck paying for something you don't need; in the future we may not even be able to disable these on-die GPUs.
I don't know much about the architecture of such things, but do you think it will ever be possible for an on-die GPU to outperform a standalone card? There are certain benefits I can think of, such as sharing cache and system memory, but given the monolithic size of current high-end GPUs I don't see how any chip manufacturer could afford to put a high-end GPU on the same die as a CPU.
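For what it's worth, the shared-memory benefit is already sort of visible in CUDA today: page-locked host memory can be mapped straight into the GPU's address space, and on an integrated GPU that shares system RAM this means no copy over the bus at all. Here's a rough sketch of the idea (the array size and the index I check are just made up for illustration):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel that bumps each element in place.
__global__ void inc(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;

    // Allow the device to map page-locked host memory into its address space.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Pinned host memory the GPU can touch directly -- on an integrated GPU
    // sharing system RAM, this avoids an explicit copy entirely.
    float *h_data;
    cudaHostAlloc((void **)&h_data, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) h_data[i] = (float)i;

    // Device-side pointer aliasing the same physical memory.
    float *d_data;
    cudaHostGetDevicePointer((void **)&d_data, h_data, 0);

    inc<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();

    printf("h_data[42] = %f\n", h_data[42]); // expect 43.0
    cudaFreeHost(h_data);
    return 0;
}
```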
I hope the trend toward more "programmable" GPUs continues. The new Fermi architecture seems to be a step in the right direction. A more general-purpose GPU architecture may be slightly less than optimal for raw performance, but at least it opens up so many new possibilities. Maybe eventually we will arrive at a point where you don't need to buy a new card to gain new DirectX functionality; you just reflash your firmware.
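To make "programmable" concrete: with CUDA (which Fermi is built around) you write plain C-like code for the GPU, nothing graphics-specific at all. A minimal sketch, the classic SAXPY:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// SAXPY (y = a*x + y): pure parallel arithmetic, no shaders involved.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h_x = new float[n], *h_y = new float[n];
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);

    printf("h_y[0] = %f\n", h_y[0]); // expect 4.0
    cudaFree(d_x); cudaFree(d_y);
    delete[] h_x; delete[] h_y;
    return 0;
}
```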
One possibility I find interesting: if on-die graphics becomes capable enough, maybe the video card as we know it could fade away. If the on-die GPU can drive all your displays, heavy-duty graphics calculations could be offloaded to a general-purpose processing card (like Intel's Larrabee) over some kind of high-speed port.
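You can already see the outline of that split in how CUDA enumerates devices: each GPU reports whether it's integrated, so an app could in principle let the on-die GPU handle the displays and send the heavy work to a discrete card. A rough sketch of what picking the compute device might look like:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    int compute_dev = -1;
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s (%s, %d multiprocessors)\n",
               d, prop.name,
               prop.integrated ? "integrated" : "discrete",
               prop.multiProcessorCount);

        // Prefer a discrete card for the heavy lifting; leave the
        // integrated GPU free to drive the displays.
        if (!prop.integrated && compute_dev < 0) compute_dev = d;
    }

    if (compute_dev >= 0) {
        printf("Offloading compute to device %d\n", compute_dev);
        cudaSetDevice(compute_dev);
    } else {
        printf("No discrete device found; falling back to device 0\n");
        cudaSetDevice(0);
    }
    return 0;
}
```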
I haven't seen much discussion of this technology, which I consider very exciting... please chip in! How will this affect motherboard manufacturing and profits for CPU and GPU companies? With the possibility of having graphics, the memory controller, the PCI controller, and more on-die, the role of the motherboard manufacturer seems ready for change too.
