Ivy Bridge should match Llano in graphics


Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
It may not matter to you, but the majority of PC users out there don't use a dedicated video card. Also, an increase in IGP performance would probably mean an increase in performance of lower-end dedicated GPUs.

What would the majority of people use this power for?
Gaming?
Now I have to laugh.
Excel doesn't care.

An increase in GPU performance would be a godsend for laptop users as well. Sure, there are dedicated graphics cards for laptops, but those things cut your battery life.

This gets more and more funny with every post.
For what?
"competitive" gaming? :D
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
You do realise it meant a competitive gaming comparison between the chips in the graph, right?
 

zebrax2

Senior member
Nov 18, 2007
977
69
91
What would the majority of people use this power for?
Gaming?
Now I have to laugh.
Excel doesn't care.

Accelerated Flash, HD video viewing, MMO games, casual games, semi-old/new games.


This gets more and more funny with every post.
For what?
"competitive" gaming? :D

I don't really care about that quote, as we don't even know if that slide is legit. My post was made in response to you downplaying the value of the IGP.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
You do realise it meant a competitive gaming comparison between the chips in the graph, right?

That graph does not correctly show the Zacate vs. HD 2000/HD 3000 performance differences that we know to be fact, as reviews have shown us. So that graph is a joke; if that's wrong, it's all wrong.
 

Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
Decreasing the operating voltage from 1V to 0.78V would be expected to reduce dynamic power consumption by roughly 40% if everything else were held constant (i.e. you reduced the voltage on a chip from the same node).



So we'd expect the lower-voltage chip to consume only 60% of the power that the higher-voltage chip consumes when both chips are clocked the same and produced on the same node.

The graph compares a 32nm planar process with a 22nm tri-gate process, so this is a comparison between a 1V VDD 32nm part and a 0.78V VDD 22nm tri-gate part. So the 60% figure has relevance to this comparison, not a same-node comparison.

However, decreasing the operating voltage AND shrinking the elements (a further reduction in capacitance) decreases power consumption even further, because the capacitance is also decreasing.
The shrink to 22nm is already included in the tri-gate's intrinsic parameters displayed by the graph, so why the hell are you adding a further shrink??


Reading the link you mention, it appears that parasitic capacitances are not reduced by the ratio that a simple shrink would have provided with planar tech.

Higher speed is primarily due to the higher transconductance of the FETs, which allows the said capacitances to be switched on/off faster; generally, the capacitance reduction from a shrink is partly offset by the lower transconductance of the shrunk FETs.

Also, it seems that they reduced the FETs' Vth (the gate-source threshold voltage for device conduction), which helps the FETs respond faster.

All in all, I don't expect better than a 30% power consumption reduction at normal speeds.
It might be that Intel, extending its thermal budget concept, will use the TDP reductions from low-power modes to let the CPU boost well beyond its rated max TDP, with average TDP over a defined time being the new TDP scheme.
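
A minimal sketch of the arithmetic both posts are leaning on, assuming the standard first-order dynamic power model P ~ C*V^2*f (the capacitance and clock values below are made up; only the voltage ratio from the posts above matters):

Code:
# Toy comparison assuming dynamic power P ~ C * V^2 * f.
# The capacitance and frequency numbers are placeholders; only the
# 1.00V vs 0.78V voltage ratio is taken from the discussion above.

def dynamic_power(capacitance_f, vdd_v, freq_hz):
    """First-order CMOS switching power estimate."""
    return capacitance_f * vdd_v ** 2 * freq_hz

p_1v00 = dynamic_power(1e-9, 1.00, 3.0e9)
p_0v78 = dynamic_power(1e-9, 0.78, 3.0e9)

print(f"0.78V vs 1.00V power ratio: {p_0v78 / p_1v00:.2f}")  # ~0.61, i.e. the ~60% figure

Whether the 22nm tri-gate parts get a further capacitance reduction on top of that voltage-driven saving is exactly the point being disputed here.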
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
What would the majority of people use this power for?
Gaming?
Now I have to laugh.
Excel doesn't care.

Isnt "Office 2011" gonna be GPGPU accelerated?
In that case,... Excel would care about your GPU.
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
That graph does not correctly show the Zacate vs. HD 2000/HD 3000 performance differences that we know to be fact, as reviews have shown us. So that graph is a joke; if that's wrong, it's all wrong.

You are assuming that it's being run at Intel-favouring settings like here on AnandTech. Why would AMD do that in an AMD slide? AMD is not going to show benchmarks at the lowest of the low settings just so Intel can make their IGP look twice as good as it really is due to a fast CPU buffing the result.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
You are assuming that it's being run at Intel-favouring settings like here on AnandTech. Why would AMD do that in an AMD slide? AMD is not going to show benchmarks at the lowest of the low settings just so Intel can make their IGP look twice as good as it really is due to a fast CPU buffing the result.

Wow, you just claimed AMD CPUs aren't fast.

But I'd really like to know how you think that a "fast" CPU could make a difference in a GPU limited situation, as any IGP is going to be.

Links to support your position would be good.
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
Wow, you just claimed AMD CPUs aren't fast.

Not at all. I would just expect AMD to use settings that show their APUs in the best light. Any reason to think otherwise?

But I'd really like to know how you think that a "fast" CPU could make a difference in a GPU limited situation, as any IGP is going to be.

Links to support your position would be good.
Pairing Zacate with a faster graphics card does almost nothing. The reason being, it's CPU bottlenecked in most cases, not GPU. Intel CPUs are the complete opposite; SB can drive the IGP much further at very low settings and resolutions.

[benchmark chart: gpuscaling_sm.jpg]
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
What do you want to know? What happens to the Intel IGP when it is really shader limited?

This is what happens.

Our CPU test actually ends up being GPU bound with Intel's integrated graphics, AMD's 890GX is actually faster here:
[benchmark chart: 34877.png]



http://www.anandtech.com/show/4083/...core-i7-2600k-i5-2500k-core-i3-2100-tested/11

The HD 3000 and HD 2000 trade blows with the HD 5450, which has less than half the bandwidth. That's the only reason it loses at all; in the games where bandwidth isn't such an issue, it wins.

Intel shaders are absolute garbage and will choke any time they are asked to do anything stressful. Even just going up a setting would cause the HD 3000 to choke so badly it would lose to the ancient 790GX. That's why you only see the lowest settings.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
What do you want to know? What happens to the Intel IGP when it is really shader limited?

No, I want you to back up this claim:

Intel can make their IGP look twice as good as it really is due to a fast CPU buffing the result

So far you haven't shown that a current Intel IGP scales with CPU speed. Let alone by 200%.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
I'm not sure gaming was at all the focus of on-chip "video." Sophisticated unified shader architectures were integrated to make the CPU better in terms of the jobs most people suspect are done by CPUs, which is to organize and process our crap, not render 3D assets in a game. Not everyone plays games on their machines, but just about everyone produces, distributes, and digests rich content on their machines. Whether it is rendering a PDF or a movie or a GUI, transcoding a blu-ray rip down to your phone/tablet, or looking at chatroulette.com, much of that is done better and with less power by a GPU than a CPU.

Even in an alternate universe where no one plays games, this exact same integration trend would occur inevitably and in about the same amount of time. What do we call the memory controller now that it is integrated? The uncore. It's like it's not even a discrete device anymore; it's simply part of the CPU, but not inside any cores, and we call it something else. This doesn't mean we will stop calling IGPs IGPs, but by becoming integrated with the integer cores, sharing cache with them and splitting work with them, you can say the GPU is just as much a part of a CPU as would be the memory and i/o hubs.
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
So far you haven't shown that a current Intel IGP scales with CPU speed. Let alone by 200%.

The HD 3000 is slower than the 5450, or slower than 80 Evergreen SPs. How do I know this? Well, it's really simple.

The 5450 has 12.8 GB/s bandwidth, and was paired with a 2500K.
The 2600K has more than double the bandwidth and slightly faster clocks.
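
(As a rough sanity check on those numbers, here is a minimal sketch of how such bandwidth figures are usually derived; the DDR3 data rates are assumptions, since the thread doesn't state exactly what the review testbeds used:)

Code:
# Memory bandwidth = bus width in bytes * effective transfer rate.
# The DDR3 speeds below are illustrative assumptions, not the
# confirmed testbed configuration.

def bandwidth_gbs(bus_width_bits, transfers_per_sec):
    return bus_width_bits / 8 * transfers_per_sec / 1e9

print(bandwidth_gbs(64, 1.6e9))    # HD 5450: 64-bit DDR3 at 1600 MT/s -> 12.8 GB/s
print(bandwidth_gbs(128, 1.333e9)) # dual-channel DDR3-1333 -> ~21.3 GB/s
print(bandwidth_gbs(128, 1.6e9))   # dual-channel DDR3-1600 -> 25.6 GB/s

Keep in mind the IGP shares that dual-channel pool with the CPU cores, so its effective bandwidth advantage is smaller than the raw numbers suggest.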

Ergo, there is no possibility of the 2600K losing if the shaders are faster, right?

Yet it loses half of the benchmarks to the 5450 + 2500K.
http://www.anandtech.com/show/4083/...core-i7-2600k-i5-2500k-core-i3-2100-tested/11

Now that we've established that 80 Evergreen SPs are faster than the HD 3000's shaders, we also know that Zacate is faster except in CPU- and memory-bandwidth-limited games.

For SB to beat Zacate, the game has to be CPU or bandwidth limited; in Zacate's case it will almost certainly be CPU limited. That is the only reason it loses to the 2600K at all.

We've seen the difference bandwidth makes in games between the 5450 and 2600K.

For example, Anand's favourite "make Intel graphics look good" game.

[benchmark chart: 34870.png]


That's what, almost 40% faster in favour of the 2600K?

But wait, what happens here?

[benchmark chart: 34878.png]


Now it's 40% slower?

The answer is obvious. The HD 3000 is a mutt which can be made to look better at very low settings and very low resolution. In Dragon Age it's down to a severe bandwidth limitation on the 5450 and nothing else.

Simply increasing the resolution or upping the settings a little would take the 5450 above the 2600K in most games, once the bottleneck actually shifts to the HD 3000 graphics.

That is why, in that AMD slide, Zacate is much closer to the 2500K than is shown in other benchmarks. AMD has tested above absolute minimum settings, which is their right, as Llano is clearly capable of gaming.

That was my first point, to address Nemesis. My second point is on IB matching Llano, something it will not do in a month of Sundays, and the reason is that Llano is a "medium settings" APU while IB, and in fact any Intel CPU+IGP, will forever be a "lowest possible settings" chip. You can look forward to similar performance at low settings; beyond that, Llano might as well be a 6990 in comparison, that's how far ahead it will be.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
So what you have claimed (again) is that Anand is Intel biased, and you have demonstrated that different games perform differently on different hardware.

You still haven't demonstrated 200% scaling with an Intel IGP that only "appears" to be running that fast. You haven't shown a single CPU scaling chart at all.

Intel can make their IGP look twice as good as it really is due to a fast CPU buffing the result
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
So far you haven't shown that a current Intel IGP scales with CPU speed. Let alone by 200%.

If you take a look at Anand's Sandy Bridge review, almost all of the gaming tests are done at 1024x768 and low settings. That's fine (he does the same for Bobcat), but I think Jimbo75's point is that at low settings the benchmarks put relatively more weight on the CPU (which is true), and that at higher settings Intel's IGP probably performs a lot worse.


It definitely is true that the difference in gaming performance between the higher-end CPUs in most benchmarks is because lower settings are used. It does look to me like SB's IGP is GPU-bottlenecked, though (since adding a discrete card increases performance, even at the same setting level). Anand actually touches on this here:

http://www.anandtech.com/show/4083/...core-i7-2600k-i5-2500k-core-i3-2100-tested/12


It would be interesting if he did a more in-depth look at this with IQ. Llano doesn't have to be much faster than IB's GPU to be a better choice, though. Hybrid CrossFire with the APU is, imho, a game changer. Llano, and later Trinity, are going to be the CPUs to get for CHEAP gaming systems. Most other users aren't going to care one way or the other.

Moving forward, for most users GPU compute is going to be more important than low-res, low-settings gaming. And we have no idea how IB (or Llano, or Trinity) is going to be in compute performance. At least I don't.
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
You still haven't demonstrated 200% scaling with an Intel IGP that only "appears" to be running that fast. You haven't shown a single CPU scaling chart at all.

That's because they don't exist. If I had a 2600K and a Zacate I could very easily show Zacate beating it in many, many games. Then again, I know for sure that I'm not getting paid by Intel to ensure that doesn't happen.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
If you take a look at Anand's Sandy Bridge review, almost all of the gaming tests are done at 1024x768 and low settings. That's fine (he does the same for Bobcat), but I think Jimbo75's point is that at low settings the benchmarks put relatively more weight on the CPU (which is true), and that at higher settings Intel's IGP probably performs a lot worse.


It definitely is true that the difference in gaming performance between the higher-end CPUs in most benchmarks is because lower settings are used. It does look to me like SB's IGP is GPU-bottlenecked, though (since adding a discrete card increases performance, even at the same setting level). Anand actually touches on this here:

http://www.anandtech.com/show/4083/...core-i7-2600k-i5-2500k-core-i3-2100-tested/12


It would be interesting if he did a more in-depth look at this with IQ. Llano doesn't have to be much faster than IB's GPU to be a better choice, though. Hybrid CrossFire with the APU is, imho, a game changer. Llano, and later Trinity, are going to be the CPUs to get for CHEAP gaming systems. Most other users aren't going to care one way or the other.

Moving forward, for most users GPU compute is going to be more important than low-res, low-settings gaming. And we have no idea how IB (or Llano, or Trinity) is going to be in compute performance. At least I don't.


I agree with everything you say. I will be shocked if Llano doesn't have a faster / better IGP than SB.

I would expect that any IGP is going to be GPU bottlenecked, which means that the CPU wouldn't matter.

Jimbo claims that Intel (or review sites) somehow benchmark games in a way that allows the CPU to overcome IGP shortcomings. That would mean that as CPU speed increased, so should the benchmark performance: overclock the CPU by 30% and the benchmark (game) performance should also increase by 30%. We all know that doesn't happen with an IGP.

Let alone the 200% that Jimbo claims Intel somehow "cheats" by.
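
For what it's worth, the disagreement maps onto a simple bottleneck model; a minimal sketch (the per-frame millisecond figures are invented purely for illustration):

Code:
# Toy bottleneck model: each frame needs CPU work and GPU work, and the
# slower of the two sets the frame rate. All times below are invented.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# GPU-bound case: a 30% faster CPU changes nothing.
print(fps(cpu_ms_per_frame=10, gpu_ms_per_frame=40))        # 25 fps
print(fps(cpu_ms_per_frame=10 / 1.3, gpu_ms_per_frame=40))  # still 25 fps

# CPU-bound case (very low settings/resolution): CPU speed shows up almost 1:1.
print(fps(cpu_ms_per_frame=40, gpu_ms_per_frame=10))        # 25 fps
print(fps(cpu_ms_per_frame=40 / 1.3, gpu_ms_per_frame=10))  # ~32.5 fps

The two sides of this thread are effectively arguing about which of the two terms dominates at the settings the reviews actually test.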
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
I pointed out that the Intel chip can be made to look much better than it really is because of certain settings.

Do you really want proof of how much a CPU makes a difference at the lowest quality settings?

http://techreport.com/articles.x/20401/16

[benchmark chart: f1-2010-igp.gif]


Now either the HD 3000 is 4x faster than the GTX 460, or there is a real CPU-bound issue going on here. What's your take on it?

If the settings were changed to Ultra High, Zacate would probably still be giving 12 fps while the 2500K plummeted to single figures.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
That's because they don't exist. If I had a 2600K and a Zacate I could very easily show Zacate beating it in many, many games. Then again, I know for sure that I'm not getting paid by Intel to ensure that doesn't happen.

Oh, okay, so you can't prove your accusation. I didn't think you could.

BTW, this thread isn't about Zacate.

Now you are implying that Anand is getting paid by Intel to spin / falsify benchmarks. It's hilarious how the most rabid fanboys always fall back on this conspiracy. Care to back it up? Show me one, just one, post by an AMD employee accusing Anand of this.

I can show you many posts of AMD employees defending Anand when baseless accusations like this are made.


I have one other question for you, if you don't mind answering: why are you here? I imagine you would feel much more comfortable reading and interacting at a site that reflects your views, and not one that has supposedly been bought off by Intel.
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
Oh, okay, so you can't prove your accusation. I didn't think you could.

I think I did and more.

BTW, this thread isn't about Zacate.
Tell your mate who brought it up: http://forums.anandtech.com/showpost.php?p=31696932&postcount=204

<snip>

I have one other question for you, if you don't mind answering: why are you here? I imagine you would feel much more comfortable reading and interacting at a site that reflects your views, and not one that has supposedly been bought off by Intel.
Feeling uncomfortable? Good.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
Oh, okay, so you can't prove your accusation. I didn't think you could.

BTW, this thread isn't about Zacate.

Now you are implying that Anand is getting paid by Intel to spin / falsify benchmarks. It's hilarious how the most rabid fanboys always fall back on this conspiracy. Care to back it up? Show me one, just one, post by an AMD employee accusing Anand of this.

I can show you many posts of AMD employees defending Anand when baseless accusations like this are made.


I have one other question for you, if you don't mind answering: why are you here? I imagine you would feel much more comfortable reading and interacting at a site that reflects your views, and not one that has supposedly been bought off by Intel.

I didn't read into this poster's statements that he was implying any of the above. Perhaps you could link some proof of your accusations, so the rest of us can understand what you are talking about?

He has posted references showing that the bottleneck can be the CPU at low resolutions and graphical detail, which is all it seems he was trying to portray. It is definitely possible that the CPU is the bottleneck in some of these games at very low resolution and graphical detail, which will make the better CPU shine in those situations.

Now, if you are actually playing that game at that very low resolution and graphical detail, then the better CPU would definitely be the better choice. However, if you are using a higher resolution or better graphical detail, then it is possible that the stronger CPU (with the weaker GPU) will no longer be the best choice.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
AnandTech does like to show Intel in a good light, but that doesn't mean he's falsifying benchmarks or getting paid off by Intel.

I don't think anyone implied they do that; Phynaz, that's just you putting words into Jimbo's mouth.

All reviewers have a bit of bias one way or another, no harm in that.
All you can really want from a reviewer is that they truthfully report their testing environment and their results. I don't doubt AnandTech does that.

I actually like this site, mainly for the forums, but their reviews are quite good too I think.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
AnandTech does like to show Intel in a good light, but that doesn't mean he's falsifying benchmarks or getting paid off by Intel.

Uhh... Anand just presents data. Right now, Intel has vastly superior CPUs to AMD's. The only place this isn't true is AMDzone.com, which I am sure you are familiar with.