Ivy Bridge engineering sample iGPU benchmarks


nategator

Junior Member
Sep 3, 2011
16
0
0
You just proved my point. That company is not buying Ivy Bridge for its GPU performance; it's buying it for its CPU performance and power efficiency.

And power efficiency is based in part on having a decent iGPU, so you don't have the power drain of discrete graphics.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,657
136
You just proved my point. That company is not buying Ivy Bridge for its GPU performance; it's buying it for its CPU performance and power efficiency.

Come for the coffee, stay for the entertainment.

Few people will build a computer with an IGP solely for its IGP performance. But there are hundreds of use cases where it can make all the difference in the world: a Sims 3 computer for the wifey, an HTPC, an arcade box, an Ultrabook. The point is that for most of the market, the rest of the system's performance and its footprint matter more than GPU performance. That said, anything that raises the lowest-common-denominator hardware baseline means better games in the future. The more people who have IB/Llano/Trinity IGPs, the better for people who want games that actually use the new $500 video card, especially with Ultrabooks being the new fad and Windows 8 tablets on the horizon.

Me, I am looking forward to a GPU-less gaming system I can take to my friends' while I build a HAF X behemoth. Next year I want game-able graphics in a $700 Ultrabook that I can tie into work. But again, it's about bringing up the low end, because the things that matter get designed around that.
 

BD231

Lifer
Feb 26, 2001
10,568
138
106
That kind of defeats the whole purpose; an APU is about higher integration and lower cost, not adding extra discrete components to your system.

The interesting thing is, if you look at Sandy Bridge iGPU performance, I don't think it's affected much (if at all) by memory speed; beyond 1333 it doesn't scale nearly as much as Llano. It probably helps a ton that the graphics can communicate with the CPU via the ring bus, whereas with Llano, communication between the CPU and GPU has to go through much slower, higher-latency main memory. It's like the old MCM Intel dual cores where the separate cores had to communicate via a slow FSB; it just seems like an enormous bottleneck.

Llano has the better GPU, but ironically Intel has done a better job of integrating the GPU with the CPU, whereas Llano is more of a GPU pasted onto the CPU and not really well integrated with it yet. When they get around to implementing a crossbar switch or ring bus connecting the CPU and GPU, it should help; maybe then you won't need 1866 or higher memory to get very good performance from the APUs.
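
For perspective, a quick back-of-the-envelope on how much raw bandwidth faster memory actually buys Llano (simple datasheet math, not measurements):

```python
# Theoretical peak DDR3 bandwidth: transfers/s * bytes per transfer * channels.
# Just datasheet arithmetic to frame the memory-speed discussion above.
def ddr3_bandwidth_gbps(transfer_rate_mts, channels=2, bus_width_bytes=8):
    return transfer_rate_mts * 1e6 * bus_width_bytes * channels / 1e9

for speed in (1066, 1333, 1600, 1866):
    print(f"DDR3-{speed} dual channel: {ddr3_bandwidth_gbps(speed):.1f} GB/s")

# DDR3-1333: ~21.3 GB/s, DDR3-1866: ~29.9 GB/s -- roughly 40% more raw bandwidth,
# which Llano's iGPU apparently scales with and SNB's mostly doesn't.
```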

You're right, AMD really did butcher the Stars core just to implement it, but I didn't see any of that on the Intel side. Good info. Catching up within the first few product generations to a prominent GPU company that has been hyping "the future is Fusion" is still shocking.

Higher integration and lower cost, yes, but you forgot better performance, which people seem to be pretty adamant about. Literally every FM1 board has a PCI Express slot, and no smart person would buy Llano just to throw in an add-in GPU. Might as well put the otherwise wasted slot to use, I say.
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
You're right, AMD really did butcher the Stars core just to implement it, but I didn't see any of that on the Intel side. Good info. Catching up within the first few product generations to a prominent GPU company that has been hyping "the future is Fusion" is still shocking.

Higher integration and lower cost, yes, but you forgot better performance, which people seem to be pretty adamant about. Literally every FM1 board has a PCI Express slot, and no smart person would buy Llano just to throw in an add-in GPU. Might as well put the otherwise wasted slot to use, I say.

I recall some issues with cache thrashing due to SB's IGP, and it isn't powerful enough to shift the bottleneck to memory, so I wouldn't necessarily say Intel's implementation is better.
 

BD231

Lifer
Feb 26, 2001
10,568
138
106
I recall some issues with cache thrashing due to SB's IGP, and it isn't powerful enough to shift the bottleneck to memory, so I wouldn't necessarily say Intel's implementation is better.

Just glad their CPUs didn't get stripped down in the process.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
I recall some issues with cache thrashing due to SB's IGP, and it isn't powerful enough to shift the bottleneck to memory, so I wouldn't necessarily say Intel's implementation is better.

This is likely not so much cache thrashing as it is clocks being lowered when both the CPU and the GPU are maxed out.

We usually don't think the base clocks matter on Sandy Bridge because, in most cases, we don't hit the scenario where both the CPU and the GPU are used to the full extent. But when we stress both, they are forced to drop from Turbo speeds back to base speeds.

I also believe that in the case of the GPU, it doesn't merely go back down to base clocks, but can disable certain execution units to further save power, like dropping from 12 EUs to 6. You can see the impact on the CPU is small but is large for the iGPU: http://www.behardware.com/articles/815-10/intel-core-i7-and-core-i5-lga-1155-sandy-bridge.html
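
If throughput scales roughly with EU count times clock, the back-of-the-envelope looks like this (850/1350MHz are the usual desktop HD 3000 base/turbo clocks; the 6 EU case is only my guess from above, not anything Intel documents):

```python
# Relative iGPU throughput, assuming it scales with active EUs * clock.
# Reference point is 12 EUs at the 1350MHz turbo clock.
def relative_throughput(eus, clock_mhz, ref_eus=12, ref_clock_mhz=1350):
    return (eus * clock_mhz) / (ref_eus * ref_clock_mhz)

print(f"Turbo, 12 EUs: {relative_throughput(12, 1350):.0%}")  # 100%
print(f"Base,  12 EUs: {relative_throughput(12, 850):.0%}")   # ~63%
print(f"Base,   6 EUs: {relative_throughput(6, 850):.0%}")    # ~31%
```

Just a sketch of how quickly the numbers fall off if both things happen at once.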
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
I ran a few benchmarks while running HWINFO32. The latter might not be accurate, but it gives a general sense of what's going on.

Programs used:
-LinX
-Cinebench R11.5
-FurMark
-HWINFO32

Cinebench + FurMark: The impact on the Cinebench score of running alongside FurMark is minimal. My normal score is 6.7 points; I got 6.5 and 6.6 points. HWINFO32 showed 16W for the GPU, and the CPU is at 60W, rising 2-3W during the benchmark run.

Cinebench alone: The result is 6.7 points. The CPU uses 60-64W.

FurMark alone: The GPU used 14-15W.

LinX alone: The most CPU-demanding of the programs; it alone uses close to 80W, and the CPU fan starts spinning faster. Got 85 GFlops.

LinX + FurMark: The CPU runs at 74-77W, and the result is not much lower at 81 GFlops. But the interesting thing is the iGPU. It uses 14-16W in the beginning, but a few seconds later (probably when LinX starts doing computations), it drops to under 8W. I did 2 runs, and between runs the iGPU goes back to 14-16W, then drops back to 7.5W as computation starts. The CPU fan spins faster than normal.
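
If anyone wants to repeat this, here's a minimal sketch of logging the readings over time instead of eyeballing the HWINFO32 window. The read_cpu_watts()/read_gpu_watts() functions are placeholders; you'd have to back them with whatever sensor interface your own tooling exposes.

```python
# Minimal polling loop for logging CPU/GPU power during a benchmark run.
# The two read_* functions are placeholders -- wire them to a real sensor source.
import csv
import time

def read_cpu_watts():
    raise NotImplementedError("hook this up to your sensor source")

def read_gpu_watts():
    raise NotImplementedError("hook this up to your sensor source")

def log_power(path, duration_s=120, interval_s=1.0):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "cpu_w", "gpu_w"])
        start = time.time()
        while time.time() - start < duration_s:
            writer.writerow([round(time.time() - start, 1),
                             read_cpu_watts(), read_gpu_watts()])
            time.sleep(interval_s)

# Start LinX + FurMark first, then: log_power("linx_plus_furmark.csv")
```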


Conclusion:
It looks like, if anything, the power management system favors the CPU over the iGPU. Significant throttling is observed on the iGPU when both CPU- and GPU-intensive applications run at once, while the CPU performance impact is minimal; it's likely the CPU merely drops back to base clocks.

On Ivy Bridge, the iGPU has a dedicated L3 cache. The primary purpose of that is to reduce power usage by not firing up the ring bus and the CPU's L3 cache, and it might also help graphics performance when running CPU + GPU intensive applications.
 

Khato

Golden Member
Jul 15, 2001
1,240
309
136
I recall some issues with cache thrashing due to SB's IGP, and it isn't powerful enough to shift the bottleneck to memory, so I wouldn't necessarily say Intel's implementation is better.

It'll be quite interesting to see how IVB performs with varying memory bandwidth, since memory speed made pretty much zero difference with SNB; hence, as you state, you can't really tell whether the shared L3 cache helps much or not. I did find it interesting to do some quick 3DMark Vantage performance-preset runs with varying memory bandwidth on an i7 2600k: going from dual channel 1333 to single channel 1333 only drops the GPU score from 1705 to 1510, and going down to single channel 1066 results in a GPU score of 1379.
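
Putting those numbers side by side (just quick arithmetic on the scores above):

```python
# Bandwidth drop vs. 3DMark Vantage GPU score drop on the i7 2600k, relative
# to dual channel 1333. Scores are the ones quoted above.
configs = [
    ("dual channel 1333",   2 * 1333, 1705),
    ("single channel 1333", 1 * 1333, 1510),
    ("single channel 1066", 1 * 1066, 1379),
]

base_bw, base_score = configs[0][1], configs[0][2]
for name, bw, score in configs[1:]:
    print(f"{name}: bandwidth -{1 - bw / base_bw:.0%}, GPU score -{1 - score / base_score:.0%}")

# single channel 1333: bandwidth -50%, GPU score -11%
# single channel 1066: bandwidth -60%, GPU score -19%
```

So even cutting bandwidth in half only costs SNB's iGPU about 11%, which is why it's hard to tell how much the shared L3 is doing.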
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136

I'm not sure about trusting Tweaktown. Last time I looked at one of their iGPU benchmarks, the GMA 4500 graphics were performing at an insane level. It couldn't have been true; something like Crysis at Medium settings at 40 fps.

BTW, for the bolded part, the Tweaktown link has the overclocked A8-3850 at a 3.48GHz CPU clock and a 1920MHz DDR RAM clock.
 
Aug 11, 2008
10,451
642
126
100% wrong

I went laptop shopping for my brother a few months ago; he needed a laptop to go off to school.

As we were waiting, everyone was talking about specs, and I overheard at least 4 people say they wanted at least a 1-2GB video card, and another kid was pushing for the $100 more expensive laptop to get the better GPU.

I don't know a single person who would want a GPU that is worse than what we had 5 years ago in a PC.

Do you think the guy paying $3500 for a maxed-out MacBook Pro wants an iGPU?

And yes, the majority of people that want an Ivy don't care about the GPU. That is the reason why Intel made an unlocked version: there is DEMAND for it. If it was like you say, they would all be locked and we would get what we get.

On-die GPUs suck; they can barely run N64 emulators. They SUCK!!! And Ivy is going up against Trinity? Really? I thought Ivy would go up against Piledriver (unless Trinity is the iGPU in Piledriver), as I really don't follow iGPUs at all since they suck and can't play anything that looks better than a PSP.

How exactly can you buy and use a CPU with an integrated GPU that can be beaten to hell by a sub-$100 real GPU?

I have my 8800GT for sale for $125 if you want a serious upgrade to your A8's GPU... heck, I feel so bad for you that I'd even sell it for $100 shipped if you want it.

$125.00 for an 8800GT??? I got one brand new for $85.00 18 months ago.
 

Khato

Golden Member
Jul 15, 2001
1,240
309
136
I'm not sure about trusting Tweaktown. Last time I looked at one of their iGPU benchmarks, the GMA 4500 graphics were performing at an insane level. It couldn't have been true; something like Crysis at Medium settings at 40 fps.

BTW, for the bolded part, the Tweaktown link has the overclocked A8-3850 at a 3.48GHz CPU clock and a 1920MHz DDR RAM clock.

Eh, the only reason I linked it was that it's the only review that came up in a quick search for an A8 3850 Street Fighter IV benchmark, and I wanted to provide another source of numbers besides the foreign site that AtenRa linked. Even if Tweaktown's numbers are odd, they likely use similar settings between systems, which gives the relative performance of the i7 2600k and A8 3850 (they have both stock and overclocked results; I was referencing the stock). That relative performance can then be scaled to the numbers obtained by expreview for their i5 2500k to get a rough idea of how the A8 3850 performs compared to IVB. AKA, the entire point of the exercise was to provide a small amount of evidence that taking benchmarks from one random review and comparing them to another isn't exactly accurate.
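
In other words, the scaling I was doing is just this (the fps values here are placeholders to show the arithmetic, not the actual review numbers):

```python
# Cross-review scaling sketch: take the A8 3850 vs i7 2600k ratio from one
# review and apply it to another review's SNB baseline to estimate where the
# A8 3850 would land on that second setup. All fps values below are made up.
tweaktown_i7_2600k_fps = 60.0   # hypothetical stock SNB result
tweaktown_a8_3850_fps  = 90.0   # hypothetical stock Llano result
expreview_i5_2500k_fps = 55.0   # hypothetical SNB result from the other review

llano_vs_snb = tweaktown_a8_3850_fps / tweaktown_i7_2600k_fps
estimated_a8_on_expreview = expreview_i5_2500k_fps * llano_vs_snb
print(f"Estimated A8 3850 on expreview's setup: {estimated_a8_on_expreview:.1f} fps")
```

Obviously the error bars on that are big, which was the point.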
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
$125.00 for an 8800GT??? I got one brand new for $85.00 18 months ago.

Yeah, that is serious, lol. I have bought two HD 4850s off eBay in the last few months. Both are whisper quiet, faster than an 8800GT, use less power, and cost $35 each. There really isn't much logic in anyone using an IGP for light gaming. But hey, to each his own. It just means I get my stuff cheaper...
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Yeah, that is serious, lol. I have bought two HD 4850s off eBay in the last few months. Both are whisper quiet, faster than an 8800GT, use less power, and cost $35 each. There really isn't much logic in anyone using an IGP for light gaming. But hey, to each his own. It just means I get my stuff cheaper...

$35 for a 4850 is a great deal.