Ivy Bridge should match Llano in graphics


IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Not really. The first one shows the 2600K pulling 57 extra watts and the 2500K pulling 49 extra. That's compared to 58 extra watts in the TechReport review, which is about the same level.

Whatever way you look at it, the HD 3000 is probably capable of pulling 60+ watts while delivering less than 20 watts' worth of GPU performance.

That difference between load and idle power also includes the CPU cores, system memory, and the storage subsystem, so it's not just the GPU. I'm just showing that it's comparable with other graphics. Don't tell me you really thought that was the GPU only. If that were true, then neither the previous-generation HD Graphics nor AMD's own HD 4290 is that efficient either, at 40W+ usage.

The SilentPC review shows it draws less power when running CPU+GPU applications together, and the Xbitlabs review shows it's lower when strictly running graphics-intensive apps.
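To put rough numbers on why a load-minus-idle figure overstates the IGP, here's a quick sketch; every component figure below is a made-up placeholder, purely to illustrate the decomposition:

# Rough decomposition of a "load minus idle" wall-power delta.
# All figures are illustrative placeholders, not measurements.
idle_system_w = 60.0        # whole system at the wall, idle
load_system_w = 117.0       # whole system at the wall, under a gaming load

delta_w = load_system_w - idle_system_w    # what reviews usually report

# The delta is not just the IGP: CPU cores, memory, VRM/PSU losses and
# storage all ramp up under load as well.
cpu_cores_w = 25.0
memory_and_storage_w = 8.0
vrm_and_psu_losses_w = 6.0

igp_estimate_w = delta_w - cpu_cores_w - memory_and_storage_w - vrm_and_psu_losses_w
print(f"load-idle delta: {delta_w:.0f} W, rough IGP share: {igp_estimate_w:.0f} W")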
 
Last edited:

jimbo75

Senior member
Mar 29, 2011
223
0
0
That difference between load and idle power also includes the CPU cores, system memory, and the storage subsystem, so it's not just the GPU. I'm just showing that it's comparable with other graphics. Don't tell me you really thought that was the GPU only. If that were true, then neither the previous-generation HD Graphics nor AMD's own HD 4290 is that efficient either, at 40W+ usage.

The SilentPC review shows it draws less power when running CPU+GPU applications together, and the Xbitlabs review shows it's lower when strictly running graphics-intensive apps.

Nobody said the previous-gen HD Graphics or the 4290 were efficient, did they?

And neither is the HD 3000. The TechReport graph shows it pulls 2.5x the power of a 50% faster card, the 6450. I'm just gonna add that the 6450 also has its own memory on board, making it even worse.

[Image: power-idle.gif]

[Image: power-load.gif]


24W for the 6450 with its own DDR3 vs 58W for the HD 3000. Don't even attempt to drum up SB's graphics as efficient. They make Fermi look like an architecture of genius.
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
Nice try, genius.

http://www.xbitlabs.com/articles/video/display/intel-hd-graphics-2000-3000_10.html

HD 3000: 54.3W from idle
HD 5450: 56.3W from idle (with a 6.2W higher idle)

Subtracting system load from idle and claiming it's representative of graphics power is shady, at best.

[Image: power-3.png]


Yeah, 4W more for an extra 400 MHz of clock speed. That's trustworthy. :thumbsdown:

Or 5W more for double the shaders at the same 1500 MHz? It should be obvious that there is something wrong with that GPU burn test.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Yeah, 4W more for an extra 400 MHz of clock speed. That's trustworthy. :thumbsdown:

Or 5W more for double the shaders at the same 1500 MHz? It should be obvious that there is something wrong with that GPU burn test.

Oh yeah, I forgot, you were an engineer for power specifications on graphics cards.

http://www.behardware.com/articles/815-9/intel-core-i7-and-core-i5-lga-1155-sandy-bridge.html

13.7W running FurMark, which is about the most demanding load you can put on a GPU.

The shaders take up less than 40% of the die area on the HD 2000/3000, so even if you took them out entirely, you're talking about 6W.
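Treating power as roughly proportional to die-area share (a back-of-the-envelope assumption, nothing more), the arithmetic looks like this:

# If the whole IGP draws ~13.7 W in FurMark and the shader array is under
# 40% of the IGP area, an area-proportional estimate caps its share:
igp_furmark_w = 13.7
shader_area_fraction = 0.40     # upper bound quoted above

shader_power_cap_w = igp_furmark_w * shader_area_fraction
print(f"shader array, area-proportional cap: ~{shader_power_cap_w:.1f} W")  # ~5.5 W, i.e. roughly 6W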

EDIT: http://www.techpowerup.com/reviews/Intel/Core_i5_2500K_GPU/9.html

The 2500K is 2W lower under load and 10W lower at idle.
 
Last edited:

jimbo75

Senior member
Mar 29, 2011
223
0
0
Looks like throttling is in action to a large degree, which wouldn't exactly be a surprise running FurMark (or any graphics benchmark). Without seeing the FPS it managed compared to a 6450, it's a pointless comparison.

If the graphics themselves aren't pulling a lot of power, the combination of CPU+GPU certainly is. You can't argue with the Bulletstorm power draw vs. FPS; it's showing how inefficient the graphics are.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Looks like throttling is in action to a large degree, which wouldn't exactly be a surprise running FurMark (or any graphics benchmark). Without seeing the FPS it managed compared to a 6450, it's a pointless comparison.

It looks like the French version of the site shows the image properly: http://www.hardware.fr/articles/816-4/intel-core-i3-2100-lga-1155-sandy-bridge-dual-core.html

"In the case of the Core i3 2100, the energy consumption in H.A.W.X alone is around 6 watts on a fixed game scene"

6W for the HD 2000

"In a similar scenario with the Core i7 2600K, the dip was from around 10 watts in IGP"

10W for the HD 3000 under a graphics load, which drops to 5W, or even as low as 3W, when a CPU load is added, so there's the throttling right there. And with only 2 cores loaded, it beats the HD 5450 by 18%.

That's the reason why StarCraft 2 performs relatively worse on the Sandy Bridge graphics: it's a very CPU-demanding game. But in the rest of the games, which are mostly GPU-intensive, it won't throttle.
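That TDP sharing can be pictured with a toy budget model; this is not Intel's actual Turbo logic and the numbers are invented, it just shows why a heavy CPU load squeezes the IGP:

# Toy CPU/IGP power-budget model -- NOT Intel's real Turbo Boost algorithm.
PACKAGE_TDP_W = 95.0      # nominal budget for a 2600K-class desktop part
IGP_CEILING_W = 15.0      # assumed ceiling for the graphics slice

def igp_budget(cpu_power_w):
    """Power left for the IGP once the CPU cores take their share."""
    return max(0.0, min(IGP_CEILING_W, PACKAGE_TDP_W - cpu_power_w))

for cpu_w in (20.0, 60.0, 90.0):   # light, moderate, heavy CPU load
    print(f"CPU at {cpu_w:>4.0f} W -> IGP may use up to {igp_budget(cpu_w):.0f} W")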
 
Last edited:

jimbo75

Senior member
Mar 29, 2011
223
0
0

It's 10W lower at idle because it doesn't have the 5450's extra onboard RAM to power.

It's 7% slower than the 5450 according to TPU.

[Image: perfrel.gif]



10W lower at idle, 2W lower under 3DMark load. That's only a 35W difference (101W load - 66W idle), so it's clear that 3DMark06 isn't really stressing the system (we've seen almost 60W load differences in other benchmarks). But the 5450, which isn't exactly renowned for its stunning efficiency, is only pulling a 27W difference *including* the extra RAM overhead.

[Image: power.gif]



If it weren't for the RAM on the 5450, it would be twice as efficient. This is vs. the slower 2500K, remember - efficiency is only going to get worse with the 2600K's higher clocks.
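Working through that arithmetic (the 5450 setup's load and idle are reconstructed from the "10W higher idle, 2W higher load" deltas quoted earlier; the RAM overhead figure is only a guess):

# Figures from the TPU charts discussed above (2500K test system).
hd3000_idle_w, hd3000_load_w = 66.0, 101.0
hd5450_idle_w, hd5450_load_w = 76.0, 103.0   # 10 W higher idle, 2 W higher load

hd3000_delta_w = hd3000_load_w - hd3000_idle_w   # 35 W
hd5450_delta_w = hd5450_load_w - hd5450_idle_w   # 27 W

# The 5450 delta still includes powering its own onboard DDR3 under load;
# the overhead below is an illustrative guess, not a measured number.
ram_overhead_w = 5.0
print(hd3000_delta_w, hd5450_delta_w, hd5450_delta_w - ram_overhead_w)  # 35 W vs low-20s once the RAM guess is removed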
 
Last edited:

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Honestly, who cares how efficient Intel graphics are?

It doesn't matter with the small IGPs... as long as they don't try to make a 590/6990-class card, it won't matter too much.

The only place it'll matter is in mobile laptops, etc.

When Llano gets reviewed, we'll see which performs best per watt used.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
10W for the HD 3000 under a graphics load, which drops to 5W, or even as low as 3W, when a CPU load is added, so there's the throttling right there. And with only 2 cores loaded, it beats the HD 5450 by 18%.

That's the reason why StarCraft 2 performs relatively worse on the Sandy Bridge graphics: it's a very CPU-demanding game. But in the rest of the games, which are mostly GPU-intensive, it won't throttle.

Wait, so you are telling me that in a CPU-intensive game, the GPU throttles? That makes that video comparing SB graphics to Llano all the more believable, then. It showed the SB graphics skipping.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Wait, so you are telling me that in a CPU-intensive game, the GPU throttles? That makes that video comparing SB graphics to Llano all the more believable, then. It showed the SB graphics skipping.

Yeah, that's the result of TDP sharing with the graphics. But the skipping in Final Fantasy is another thing altogether.
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
Gotta disable that on-die GPU and use your video card. MUCH MUCH faster.

The onboard GPU from both vendors is what onboard sound is to mobos. You're not gonna use that crummy GPU; disable it in the BIOS.

Use your video card... thx
:thumbsup: :D
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
Gotta disable that on-die GPU and use your video card. MUCH MUCH faster.

The onboard GPU from both vendors is what onboard sound is to mobos. You're not gonna use that crummy GPU; disable it in the BIOS.

Use your video card... thx
:thumbsup: :D

That's a very retarded statement. For starters, as CPUs have gotten faster and mobo-based audio features have gotten better, 98% of the people out there are running off of onboard sound.

There are tons of reasons this has value. First, using the integrated video for 2D loads, which will increase the life of the video adapter. Use in HTPCs, where the feature set and decoding features matter more than its 3D gaming capabilities. Use in laptops, where getting a discrete video option usually takes a laptop up to $1k; this is great for all of us because it brings the bare minimum up for everyone, meaning some programs are less held back (like WoW and SC2).

Forgot the most important part: Hybrid CrossFire. It's not the greatest GPU, but if it helps the discrete card, then why would you want to disable it?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That's a very retarded statement. For starters, as CPUs have gotten faster and mobo-based audio features have gotten better, 98% of the people out there are running off of onboard sound.

Actually, tweak's statement is not too far off. Even if 98% of people use onboard audio, it's miles and miles off what a dedicated sound card can put out. The difference between my P55 audio and the X-Fi Platinum is staggering. I would never use onboard audio again. Most people who continue to claim that onboard audio is just as good clearly haven't used a good dedicated sound card. I'm not even talking about the Asus Xonar series, which are even better than the X-Fi.

There are tons of reasons this has value. First, using the integrated video for 2D loads, which will increase the life of the video adapter.

Do you have evidence to support your view? You buy a video card to use it, not to "preserve" it as a fossil for 1000 years. In 5 years, a $50 video card will be faster than a $500 video card you purchase today. So trying to curb the usage of your main video card to "preserve" its life is a ridiculous proposition. This is especially so since a lot of brands offer 3-year and even lifetime warranties.

Use in HTPCs, where the feature set and decoding features matter more than its 3D gaming capabilities. Use in laptops, where getting a discrete video option usually takes a laptop up to $1k; this is great for all of us because it brings the bare minimum up for everyone, meaning some programs are less held back (like WoW and SC2).

Those points are valid, although Llano will do a lot more than SB from this point of view. You can't play modern games on SB at a modern 1080p resolution, end of story. Even if it's 3x faster than previous Intel graphics, it's still borderline useless. Then again, most people care about battery life in laptops. This is where Intel wins, and it's still going to be a hard sell for Llano in that regard. Considering consumers have been buying laptops with Intel craphics for 5+ years, it's hard to imagine that the majority even care about the graphics power in their laptop.

Forgot the most important part: Hybrid CrossFire. It's not the greatest GPU, but if it helps the discrete card, then why would you want to disable it?

That will work great for Llano and its 400 SPs. But then you have to get the slower AMD processor... sounds like a compromising proposition. I agree, though, that Hybrid CF will become very cool once onboard GPUs get even faster.
 
Last edited:

formulav8

Diamond Member
Sep 18, 2000
7,004
523
126
Oh yeah, AMD's CPUs are just so slow and can't do anything properly or fast enough. :thumbsdown:

What task will general people do on a computer/laptop that an AMD CPU can't do? Solitaire? Web browsing? Watching videos/HD? Encoding an MP3?

I forgot, AMD's CPUs will take way, way longer to encode a movie! Plus you will lose 10 hours of battery life compared to an Intel CPU.

Anyways, Llano should have no problems doing what most people need a computer to do...
 
Last edited:

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
Some information on desktop Llano APUs (Socket FM1) from System Monitor.exe.config in AMD's System Monitor 1.0.5.

<!-- Desktop APU Devices -->
<GPU RadeonHDModel="HD6370D" DeviceId="9642" />
<GPU RadeonHDModel="HD6410D" DeviceId="9644" />
<GPU RadeonHDModel="HD6530D" DeviceId="964A" />
<GPU RadeonHDModel="HD6550D" DeviceId="9640" />


http://pastebin.ca/2046620
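If anyone wants to pull those IDs out of the config programmatically, here's a minimal sketch; it assumes the file keeps the element layout shown above, and the path will differ per install:

# Map Radeon device IDs to model names from AMD System Monitor's config.
# Path and element layout are assumptions based on the snippet above.
import xml.etree.ElementTree as ET

tree = ET.parse("System Monitor.exe.config")   # adjust to your install path
models = {gpu.get("DeviceId"): gpu.get("RadeonHDModel")
          for gpu in tree.getroot().iter("GPU")}
print(models)   # e.g. {'9642': 'HD6370D', '9644': 'HD6410D', ...}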
 

psoomah

Senior member
May 13, 2010
416
0
0
Ivy Bridge won't be competing with Llano; it'll be competing with Trinity, which will have NEXT-generation Bulldozer CPU cores. Since Llano, Bulldozer, Bulldozer II and Trinity will all be fabricated on the same GloFo process node, and AMD is currently shipping Llano (and very soon Bulldozer) on that node, one might expect Trinity to arrive sooner rather than later in 2012. Bulldozer II as well, for that matter. No 28nm node holdup on these products.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Oh yeah, AMD's CPUs are just so slow and can't do anything properly or fast enough. :thumbsdown:

What task will general people do on a computer/laptop that an AMD CPU can't do? Solitaire? Web browsing? Watching videos/HD? Encoding an MP3?

That's not how the consumer looks at it. Sure, AMD CPUs are fast enough for modern tasks, but with Intel you get faster CPUs that consume less power too. So what exactly would entice you to get an AMD system other than a lower price?
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
I'll go ahead and predict 4x the performance. Sandy Bridge trades blows with the 5450 (with 80 SPs). Ivy Bridge will at best double the performance set by Sandy Bridge, I predict, though I think it will be more like 50 percent.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
^

The 5450 has its own memory controller and cache. The Intel one shares all that. So you can't compare them on a 1-to-1 design basis like that.

I would assume that the figure doesn't include the video decode hardware either, unlike the 5450. That's a good 100-million-transistor chunk right there, and then you have the added transistor cost of DX11 and OpenCL capability.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
^

The 5450 has its own memory controller and cache. The Intel one shares all that. So you can't compare them on a 1-to-1 design basis like that.

I would assume that the figure doesn't include the video decode hardware either, unlike the 5450. That's a good 100-million-transistor chunk right there.

The Radeon 3450 comprises 182 million transistors with 40 SPs + 4 TMUs + 4 ROPs and is DX10.1 capable. The 4550 has 80 SPs + 8 TMUs + 4 ROPs, and its transistor count is 242M (only a 60M increase to double the SPs + TMUs and remain DX10.1). The Radeon 5450 has 80 SPs + 8 TMUs + 4 ROPs like the 4550, and the same relative per-clock performance, but to be DX11 capable (and I guess OpenCL too) it has a massive increase of 50M transistors. I'm sure there were increases in video decode size among other things, but clearly AMD has its transistor-for-transistor performance down in comparison to Intel when it comes to actual 3D performance.

The Radeon 64xx series increases the count to 370 million, with 160 SPs + 8 TMUs + 4 ROPs and DX11 capability, only a 90M increase for double the SP performance. Fascinating, considering the 26xx series was 390M transistors and the 36xx GPUs were 378M transistors, both being 120:8:4 configurations. AMD managed to reduce the transistor count for the same, if not slightly better, performance plus DX10.1 capability when going to the 3000 series. IIRC the 4xxx series introduced some change in ROP design too (not too sure, really). Basically, AMD has its head in the game and uses less power to do it compared to Nvidia and Intel. If only they were as good as Nvidia at drivers, though they are very good now.
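Rough transistors-per-SP arithmetic on those counts (the 5450 figure is inferred from the "242M + ~50M for DX11" statement above; treat all of them as approximate):

# Transistor and SP counts as quoted above (approximate).
gpus = {
    "Radeon 3450 (DX10.1)": (182e6, 40),
    "Radeon 4550 (DX10.1)": (242e6, 80),
    "Radeon 5450 (DX11)":   (292e6, 80),    # 242M + ~50M for DX11/OpenCL
    "Radeon 6450 (DX11)":   (370e6, 160),
}
for name, (transistors, sps) in gpus.items():
    print(f"{name}: ~{transistors / sps / 1e6:.1f}M transistors per SP")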

Finally, it's crazy on Intel's part to ramp their IGP so high in speed, running twice as fast as most AMD GPUs at its max. Doesn't Intel understand that energy needs go up exponentially as you increase the clock on a linear scale (like aerodynamic drag)? Of course they do, but most customers are not going to notice the thermal increase or measure their CPU's energy use on the electric bill, though it could decrease the life and durability of the processor if the standard cooling weren't designed to take it on. Too bad it's policed by the TDP output in comparison to how much the CPU is being used.
 
Last edited:

Tuna-Fish

Golden Member
Mar 4, 2011
1,668
2,540
136
Finally, it's crazy on Intel's part to ramp their IGP so high in speed, running twice as fast as most AMD GPUs at its max.

The clock speeds depend very much on the process they run on -- the Intel 32nm bulk is the fastest process in the world right now, while the TSMC 40nm that AMD and NV GPUs run on is much more conservative.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Doesn't Intel understand that energy needs go up exponentially as you increase the clock on a linear scale (like aerodynamic drag)?

Not at all.

Energy needs go up linearly with frequency unless you are adding voltage. But voltage doesn't always increase with frequency; voltage is constant within a family, so a 1GHz SKU might have the same voltage as a 1.5GHz one.

Plus, there are process differences. If process X allows a 1GHz max frequency at 1.1V and process Y allows 1.6GHz at the same 1.1V, the power difference in that case would only be linear, not exponential as you wrongly stated.

And for those who are wondering: if you think you can just halve the voltage (which results in 1/4x the power), halve the frequency, and increase the transistor count by 4x to get 2x the performance at the same power, think again. Voltage scaling (lowering voltage to lower power usage) has slowed down significantly on new process tech, so you'll hit a hard wall if you try to reduce power by reducing voltage.
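The first-order relation behind all of this is dynamic power ~ C*V^2*f: frequency alone scales power linearly, and it's the V^2 term that really hurts. A quick sketch with made-up numbers:

# First-order CMOS dynamic power: P ~ C * V^2 * f. All inputs are illustrative.
def dyn_power(c_eff, volts, freq_ghz):
    return c_eff * volts**2 * freq_ghz    # arbitrary units

base = dyn_power(1.0, 1.1, 1.0)           # 1.0 GHz @ 1.1 V
faster_same_v = dyn_power(1.0, 1.1, 1.6)  # 1.6 GHz @ 1.1 V (better process)
half_voltage = dyn_power(1.0, 0.55, 1.0)  # same clock, half the voltage

print(f"1.6 GHz at the same voltage: {faster_same_v / base:.1f}x the power (linear in f)")
print(f"half the voltage at the same clock: {half_voltage / base:.2f}x the power (the 1/4x above)")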