- Nov 2, 2013
I've been wondering for quite some time now how Intel's GPU architecture compares to AMD's and NVIDIA's. What I mean is that NVIDIA and AMD/ATI have been neck and neck since the beginning of 3D graphics, and both are quite far ahead of Intel's solutions. Yet Intel has successfully integrated its graphics into the CPU and, by that move, basically taken most of the GFX market share, even though its GPUs are much less efficient than the competition's.
Can someone please explain why Intel lags so far behind AMD/NVIDIA in graphics performance? Does it spend significantly fewer transistors on graphics, or is the architecture just bad? If so, how, and why don't I hear news about an Intel "Graphics Core Next" or something like that? For years they have had those EUs, which I don't understand, and they have just been adding more of them generation after generation along with some extra features. To me this feels like taking the easy route: using their massive manufacturing capability to brute-force their way out of a bad design by adding more transistors. Will that route still be viable five years from now? It feels to me like Intel is only using its resources to catch up, or is that just a layman's impression?
---------------------------------------------------------------------------------------------------------------
For example, I have an i3-4130, and its GPU chokes the minute I turn on any madVR options that use the GPU to improve video quality (basically any upscaling, and especially NNEDI3). But with a low-end AMD card you can do this no problem, though probably not at the highest settings. So why is Intel's GPU so bad in this regard, while you can still do GPGPU stuff on it and play some older games? Is this going to change with Clover Trail or Skylake... or never?
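To be clear about what I mean by "GPGPU stuff": here's a toy pyopencl sketch of the kind of per-pixel work a video renderer can offload to the GPU. This is not madVR's actual NNEDI3 kernel, just an illustration I made up (the kernel name, gain value and buffer sizes are placeholders); the point is that the same OpenCL code runs on Intel, AMD or NVIDIA drivers alike.

```python
import numpy as np
import pyopencl as cl

# Pick whatever OpenCL device is available (Intel iGPU, AMD or NVIDIA dGPU).
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# A trivial per-pixel kernel: scale every sample by a gain factor.
src = """
__kernel void scale_pixels(__global const float *in,
                           __global float *out,
                           const float gain)
{
    int i = get_global_id(0);
    out[i] = in[i] * gain;
}
"""
prg = cl.Program(ctx, src).build()

# One 1080p frame worth of float samples (made-up test data).
pixels = np.random.rand(1920 * 1080).astype(np.float32)
result = np.empty_like(pixels)

mf = cl.mem_flags
in_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=pixels)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, result.nbytes)

# Launch one work-item per pixel, then copy the result back to the host.
prg.scale_pixels(queue, pixels.shape, None, in_buf, out_buf, np.float32(1.1))
cl.enqueue_copy(queue, result, out_buf)
```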
- What is the fundamental reason for Intel's GPU being so far behind the competition?
- How does Intel graphics compare to the competition in transistor count?
- Performance/transistors?
- Perf/watt? (rough back-of-envelope sketch below)
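Since I can't break those numbers out myself, here's the back-of-envelope calculation I'd do once you have spec-sheet figures. Everything in it (unit counts, FLOPs per clock, clocks, TDPs, transistor counts) is a placeholder I made up for illustration, not a claim about any real part:

```python
# Rough peak-FP32 estimate and the two efficiency ratios asked about above.
# Plug in real spec-sheet values; the numbers below are placeholders only.

def peak_gflops(shader_units, flops_per_clock_per_unit, clock_ghz):
    """Theoretical peak single-precision throughput in GFLOPS."""
    return shader_units * flops_per_clock_per_unit * clock_ghz

def efficiency(gflops, tdp_watts, transistors_billions):
    """Return (GFLOPS per watt, GFLOPS per billion transistors)."""
    return gflops / tdp_watts, gflops / transistors_billions

# Placeholder integrated GPU: 20 EUs, 16 FLOPs/clock each, 1.2 GHz.
igpu = peak_gflops(20, 16, 1.2)
# Placeholder entry-level discrete GPU: 384 shaders, 2 FLOPs/clock, 1.0 GHz.
dgpu = peak_gflops(384, 2, 1.0)

print("iGPU peak: %.0f GFLOPS" % igpu)
print("dGPU peak: %.0f GFLOPS" % dgpu)
# TDP (W) and transistor counts (billions) are placeholders too.
print("iGPU GFLOPS/W, GFLOPS/Btransistors:", efficiency(igpu, 15, 1.0))
print("dGPU GFLOPS/W, GFLOPS/Btransistors:", efficiency(dgpu, 65, 1.3))
```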
