How does Intel's graphics architecture really compare against the competition (Nvidia/AMD)?

stuff_me_good

Senior member
Nov 2, 2013
206
35
91
I've been wondering for quite some time now how Intel's GPU architecture compares to AMD's and Nvidia's. What I mean is that Nvidia and AMD/ATI have been neck and neck since the beginning of 3D graphics, and they are quite far ahead of Intel's solutions. Yet Intel has successfully integrated its GPU into the CPU, and by that move basically taken the whole GFX market share, even though its graphics are so much less efficient than the competition's.


Can someone please explain to me why Intel lags so far behind AMD/Nvidia in graphics performance? Does it use significantly fewer transistors, or is the architecture just bad? If so, how, and why don't I hear news about an "Intel Graphics Core Next" or something like that? For years they have had those EUs, which I don't understand, and they have just been adding more of them generation after generation and bolting on some more features. To me this feels like taking the easy route: use the massive manufacturing capability to brute-force your way out of a bad design by adding more transistors. Will this route be a viable option for staying competitive, say, five years from now? To me it feels like Intel is using its resources just to catch up, or is this only a layman's impression?

---------------------------------------------------------------------------------------------------------------
For example, I have an i3-4130, and the GPU core chokes the minute I turn on any options in madVR that use the GPU to improve video quality (basically any upscaling, and especially nnedi3). But with a low-end AMD card you can do this no problem, though probably not at the highest settings. So why is Intel's GPU so bad in this regard, while you can still do GPGPU work and play some older games? Is this going to change in Clover Trail or Skylake... or never?
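To put rough numbers on why nnedi3 in particular buries an HD 4400-class iGPU, here is a back-of-the-envelope sketch; the window size and neuron count are illustrative assumptions about nnedi3's settings, not madVR's exact internals:

```python
# Rough FLOP estimate for one nnedi3 doubling pass. The window size
# and neuron count below are assumed example settings, not madVR's
# exact internals.

width, height = 1920, 1080   # output resolution of the doubling pass
neurons = 32                 # assumed nnedi3 "nns" setting
window = 8 * 4               # assumed local pixel window per prediction
fps = 24                     # film frame rate

# nnedi3 evaluates a small neural network for every interpolated
# pixel; each pass doubles one dimension, so roughly half the output
# pixels are predicted, at about one multiply-add per neuron weight.
predicted_pixels = width * height / 2
flops_per_pixel = 2 * neurons * window   # multiply + add per weight

gflops = predicted_pixels * flops_per_pixel * fps / 1e9
print(f"~{gflops:.0f} GFLOP/s for a single luma doubling pass")
# ~51 GFLOP/s -- and chroma, extra doublings, and downscaling back to
# the target resolution all stack on top of that.
```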


- What is the fundamental reason for Intel's GPU being so far behind the competition?
- How does Intel gfx compare to the competition in transistor count?
- Performance/transistor
- Perf/watt (a rough sketch of these last two metrics follows below)
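
For the last two, a trivial sketch of what I mean (all figures are made-up placeholders, not measured values):

```python
# Sketch of the two normalized metrics above. Every number here is a
# made-up placeholder -- substitute a real benchmark score, transistor
# count, and measured power draw.

def perf_per_transistor(score: float, transistors: float) -> float:
    """Benchmark score per billion transistors."""
    return score / (transistors / 1e9)

def perf_per_watt(score: float, watts: float) -> float:
    """Benchmark score per watt of package power."""
    return score / watts

# Hypothetical iGPU: 30 fps in some game, ~0.9 billion transistors in
# the GPU slice, ~15 W of the package budget under GPU load.
print(perf_per_transistor(30, 0.9e9))  # fps per billion transistors
print(perf_per_watt(30, 15))           # fps per watt
```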
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
For the given amount of silicon, transistor budget, and memory bandwidth limitations, it seems to be architecturally competitive. It's just not a 300 W, 7-billion-transistor, 384-bit memory bus card; it's constrained by the CPU it sits in.
 

stuff_me_good

Senior member
Nov 2, 2013
206
35
91
I get that, but how do you explain AMD's APUs, which are so much better than any of Intel's HD 4xxx iGPUs under the same limits? AMD even has far worse manufacturing capabilities, and yet its iGPUs are so much better than Intel's.
 

hawtdawg

Golden Member
Jun 4, 2005
1,223
7
81
Intel's Iris Pro is just as good as or better than AMD's APUs. I'll try to find a link, but it's also comparable to Maxwell in performance per watt, granted it's on a smaller process node. In short, Intel could dominate the discrete GPU market if they wanted to, but it wouldn't be a good investment with mobile products appearing to be the future.
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
AMD APUs are much bigger. Their CPU performance is lower than an i3's while the die is considerably larger than an i5's.
 
Last edited:

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
- What is the fundamental reason for Intel's GPU being so far behind the competition?
- How does Intel gfx compare to the competition in transistor count?
- Performance/transistor
- Perf/watt

Good:

- Strong geometry and ROP performance
- Relatively good compute performance
- Relatively power efficient, though on the same process it would likely use more power than current AMD or Nvidia chips

Bad:

- Poor AA and texturing performance
- Poor scaling (architectural bottlenecks)
- Requires a lot of die space
- REALLY POOR DRIVERS (though they are constantly getting better)
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Intel's latest couple of generations are quite competitive per transistor. Having a node advantage, and the ability to ramp clock speeds much higher than AMD and Nvidia GPUs on bigger nodes, gives them an immense advantage in performance per transistor and per unit of die area. The biggest issue, though, is that the CPU die has to absorb both the iGP and CPU TDP. For some of the faster-clocked or thermally constrained chips, you can only run max speed on one part of the die or the other for an extended period. I assume this is mostly an issue for mobile applications, which are the main arena where improved Intel iGPs are needed.
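
A toy model of that shared budget, purely to illustrate the point; the package limit and per-block demands are invented, not real Haswell numbers:

```python
# Toy model of a shared CPU+iGPU package power budget. The package
# limit and per-block demands are invented for illustration, not
# real Haswell figures.

PACKAGE_TDP = 47.0  # hypothetical mobile package limit, in watts

def sustained_power(cpu_demand_w: float, gpu_demand_w: float):
    """Scale both blocks down proportionally when their combined
    demand exceeds the package budget."""
    total = cpu_demand_w + gpu_demand_w
    if total <= PACKAGE_TDP:
        return cpu_demand_w, gpu_demand_w
    scale = PACKAGE_TDP / total
    return cpu_demand_w * scale, gpu_demand_w * scale

# Gaming load: both blocks want full power, so neither gets it, and
# clocks on both sides drop to fit the budget.
print(sustained_power(35.0, 30.0))  # -> (~25.3, ~21.7), both throttled
```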

It would be interesting to see them create a large dGPU based on their current architecture, but whether it could be successful and competitive is dubious at best, considering uarch scaling, the need for dedicated memory, and the associated interfaces involved.

One also must weigh the merits of the future of the dGPU versus the iGP. Upgrading an iGP means replacing the entire processor; upgrading a dGPU just means putting in a new card. You also can't always get the CPU/iGP configuration or performance you want with an Intel chip, especially when you weigh in the product cost. The Intel GT3e chips cost a poop ton, and the money sunk into that iGP would be better spent on a graphics card with much better performance.
 
Last edited:

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Intel's Iris Pro is just as good as or better than AMD's APUs. I'll try to find a link, but it's also comparable to Maxwell in performance per watt, granted it's on a smaller process node. In short, Intel could dominate the discrete GPU market if they wanted to, but it wouldn't be a good investment with mobile products appearing to be the future.

I absolutely agree that Intel is making some insane progress, but I highly doubt there is any chance Intel would/could use this tech in an Intel dGPU.

Why?

Because of the Intel and Nvidia cross-licensing agreement. Although we don't know the specifics, nor would we ever see the contract, Intel pays a hefty sum to Nvidia to use their technologies. And I would wager it's no coincidence that Intel iGPs started leapfrogging ahead after this agreement was made public.

There is very little doubt in my mind that the modern Intel iGP uses a lot of Nvidia tech. Intel can apply the technology radically differently, and I think they have done very, very well. But you'd better believe that after Intel threatened to shake up the dGPU market with Larrabee, Nvidia was smart enough to build provisions into the deal they worked out that prevent Intel from doing such a thing.

So I can commend Intel for the great progress they have made on their IGP. It is impressive in so many ways. But I highly, highly doubt there will be an Intel HD series dGPU anytime soon.

Besides, we have seen some of these technologies in dGPUs already... Nvidia's.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I absolutely agree that Intel is making some insane progress, but I highly doubt there is any chance Intel would/could use this tech in an Intel dGPU.

Why?

Margins. Then the rest doesn't matter.
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
I seriously doubt NV has any upper hand over Intel. They are competitors now more than ever.

Besides, we have seen some of these technologies in dGPUs already... Nvidia's.

Care to mention some of those technologies?
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
I absolutely agree that Intel is making some insane progress, but I highly doubt there is any chance Intel would/could use this tech in an Intel dGPU.

Why?

Because it's not a big enough market for Intel to care about, nor does it fit into their portfolio anywhere. Nvidia generated about $4 billion in revenue in 2013 from everything they make; Intel was over $50 billion. The amount of R&D Intel would have to invest in trying to catch up to Nvidia would not be worth the profits it would generate on the other end.
 

Pottuvoi

Senior member
Apr 16, 2012
416
2
81
Yes, Intel has been quite impressive of late.
Currently they seem to be the company that introduces new features for GPUs (e.g. PixelSync).

If they continue to do this with every tock, they will keep the feature lead, as AMD and Nvidia have been really slow on that front.
 
Last edited:

stuff_me_good

Senior member
Nov 2, 2013
206
35
91
If Intel's iGPU is as great as everyone is touting, how do you explain its inability to do anything with madVR's up/downscaling options (basically any of them, especially nnedi3)?

Because it's not a big enough market for Intel to care about, nor does it fit into their portfolio anywhere. Nvidia generated about $4 billion in revenue in 2013 from everything they make; Intel was over $50 billion. The amount of R&D Intel would have to invest in trying to catch up to Nvidia would not be worth the profits it would generate on the other end.

Well, this seems an odd conclusion, since not long ago Intel was so eager to introduce Larrabee to the market as the messiah of graphics. But in the end it failed badly, and they had to quietly sweep it under the carpet. And now that they presumably have competitive GPU technology (as everybody is so eagerly touting), they are doing nothing with it.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
AMD APUs are much bigger. Their CPU performance is lower than an i3's while the die is considerably larger than an i5's.

The Core i3 GT3 is 190 mm² at 22 nm.
Kaveri is 240 mm² at 28 nm.

Kaveri is faster at MT and iGPU; the Core i3 is only faster at ST.
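
For what it's worth, a first-order node normalization of those two areas (assuming ideal area scaling with the square of the feature size, which real processes never quite achieve):

```python
# First-order normalization of the two die sizes above to a common
# node. Ideal scaling (area ~ feature_size^2) is an optimistic
# assumption; real processes fall short of it.

i3_gt3_area = 190.0    # mm^2 at 22 nm, from the post above
kaveri_area = 240.0    # mm^2 at 28 nm

scale = (28.0 / 22.0) ** 2           # ~1.62x area growth 22 nm -> 28 nm
i3_gt3_at_28nm = i3_gt3_area * scale

print(f"i3 GT3 at a hypothetical 28 nm: ~{i3_gt3_at_28nm:.0f} mm^2")
# ~308 mm^2 -- under ideal scaling the i3 GT3 would be the *larger*
# design on an equal node, so raw die size alone doesn't settle this.
```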
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
You know what's funny??? The Core i3 4360 runs at 3.7 GHz, the same as the A10-7850K under heavy multithreaded workloads.

From the AT link:

HandBrake v0.9.9, 2x 4K (frames per second, higher is better)
A10-7850K = 10.22
Core i3 4360 = 11.21

Hybrid x265, 4K video (frames per second, higher is better)
A10-7850K = 0.76
Core i3 4360 = 0.58

x264 HD Benchmark v3.03, 1st pass (frames per second, higher is better)
A10-7850K = 79.56
Core i3 4360 = 82.87

x264 HD Benchmark v3.03, 2nd pass (frames per second, higher is better)
A10-7850K = 24.11
Core i3 4360 = 23.12

7-Zip benchmark, 32 MB dictionary (total MIPS, higher is better)
A10-7850K = 11912
Core i3 4360 = 11528

POV-Ray 3.7 Beta RC4 (score, higher is better)
A10-7850K = 820
Core i3 4360 = 776

Now, the Core i3 GT3 has less L3 cache and lower frequencies than the Core i3 4360.
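
One way to boil those six results down to a single number is the geometric mean of the per-benchmark ratios; a quick sketch using exactly the figures quoted above:

```python
from math import prod

# Per-benchmark ratios of A10-7850K over Core i3 4360, using the six
# results quoted above. All are higher-is-better, so a ratio above
# 1.0 means the A10 wins that test.
results = {
    "HandBrake 2x4K": (10.22, 11.21),
    "Hybrid x265 4K": (0.76, 0.58),
    "x264 1st pass":  (79.56, 82.87),
    "x264 2nd pass":  (24.11, 23.12),
    "7-Zip 32MB":     (11912, 11528),
    "POV-Ray 3.7":    (820, 776),
}

ratios = [a10 / i3 for a10, i3 in results.values()]
geomean = prod(ratios) ** (1 / len(ratios))
print(f"geometric mean A10/i3 ratio: {geomean:.3f}")
# ~1.045 -- a slight overall edge to the A10 across this particular
# selection of tests.
```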
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Why don't you post all the heavy MT loads Kaveri loses? Besides all the 1-3 thread loads.

You simply came up with an untrue statement.

We could also point out that Kaveri is 95 W while the i3 is 54 W. Not to mention the i3 is cheaper.
 
Last edited:

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Why don't you post all the heavy MT loads Kaveri loses? Besides all the 1-3 thread loads.

You simply came up with an untrue statement.

We could also point out that Kaveri is 95 W while the i3 is 54 W.

The truth doesn't support his agenda.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Why don't you post all the heavy MT loads Kaveri loses? Besides all the 1-3 thread loads.

You simply came up with an untrue statement.

We could also point out that Kaveri is 95 W while the i3 is 54 W. Not to mention the i3 is cheaper.


The truth doesn't support his agenda.

I left the Intel-optimized benchmarks out and kept only the neutral ones; it is fair after all ;)

As I said earlier, the Core i3 GT3 has less L3 cache, lower frequencies, and a higher TDP than the Core i3 4360.

The Core i3 4360 may be $20 cheaper, but it has far lower iGPU performance than the A10-7850K.

So my statement was true.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I left the Intel-optimized benchmarks out and kept only the neutral ones; it is fair after all ;)

As I said earlier, the Core i3 GT3 has less L3 cache, lower frequencies, and a higher TDP than the Core i3 4360.

The Core i3 4360 may be $20 cheaper, but it has far lower iGPU performance than the A10-7850K.

So my statement was true.

Utter rubbish again.
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
Currently they seem to be the company that introduces new features for GPUs (e.g. PixelSync).

http://www.ubergizmo.com/2013/03/intel-pixelsync-order-independent-transparency/

"It is also fair to point out that other solutions to render comparably nice smoke or hair already exist for GPUs without AOIT, but they require more work and complexity."

"For all its advantages, Intel’s new AOIT still leaves a few things for game developers to mull over. For example it may not get along well with current anti-aliasing techniques. Also, objects that are rendered using a deferred lighting engine would not be compatible with this, since deferred lighting prevents Intel’s AOIT to know what the final color is, since it is computed in multiple render-passes. Many high-profile games use deferred rendering…"

PixelSync is not exactly much of a win.
NV and AMD have brought more advances than Intel since then. Was the Broadwell Gen8 iGPU announced with any new features?


Kaveri was still faster than the GT3e in gaming. Not sure that will change with Broadwell, though.

EDIT: I would like Intel to push their iGPUs more!
 
Last edited:

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
The Core i3 GT3 is 190 mm² at 22 nm.
Kaveri is 240 mm² at 28 nm.

Kaveri is faster at MT and iGPU; the Core i3 is only faster at ST.

The Iris Pro IGP section is around 160 mm² to 165 mm² in size, plus 80 mm² of L4 cache.

The Kaveri IGP section is around 120 mm².
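
Taking the midpoint of that estimate, the iGPU-only comparison works out roughly like this (same caveat as before about the differing nodes):

```python
# Comparing just the iGPU-related silicon, using the areas quoted
# above. Midpoint taken for the 160-165 mm^2 estimate; treat as rough.

iris_pro_igp = 162.5   # mm^2 of 22 nm silicon
iris_pro_l4 = 80.0     # mm^2 L4 cache die
kaveri_igp = 120.0     # mm^2 of 28 nm silicon

ratio = (iris_pro_igp + iris_pro_l4) / kaveri_igp
print(f"Iris Pro spends ~{ratio:.2f}x the iGPU-related area")  # ~2.02x
# And that is denser 22 nm silicon vs 28 nm; normalizing for node as
# in the earlier sketch would widen the gap further.
```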
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
Yeah, not working with deferred lighting is a big no-no.

There's no use for a feature that breaks compatibility with a technique as widely used (and abused) as the one mentioned above.