
GF110 is actually GF100B

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
Well, in that case the 6970 had better beat the crap out of it if it is only a 480 re-brand!

That would be embarrassing if a next-gen part couldn't beat a re-touched 480. That would be like the 5870 not beating GT200b. 😉

Well, it won't be an A-- kicking now that the 580 is out, but ya, the 6970 will win most benchies. I am sure AMD will insist on which games get used in the upcoming reviews. If the review sites don't comply with AMD's request, I doubt they will get any more review cards, or CPUs.
 
Oh no... the die-size argument... sorry, not going there. Although you know it's bad when that is what you grasp onto.

Then again, we have no idea what the 6970 is going to bench at, so this is all just speculation anyway.

BS, we have all kinds of ideas. Heat sinks are interesting, are they not? And then there are the hours of testing under exactly controlled conditions of high usage in a small room. It gets ever so interesting.
 
Well, it won't be an A-- kicking now that the 580 is out, but ya, the 6970 will win most benchies. I am sure AMD will insist on which games get used in the upcoming reviews. If the review sites don't comply with AMD's request, I doubt they will get any more review cards, or CPUs.

That would be interesting....they would catch a lot of flak for that.
 
NV fanboys unite! Form Voltron!

OP this is no surprise to anyone, it's already known GTX 580 is a GTX 480 with 512 shaders instead of 480 and a higher core clock.

Also with lower temps, less power draw, quieter operation, and faster clock for clock performance. But other than that, you're spot on!
 
Well, in that case the 6970 had better beat the crap out of it if it is only a 480 re-brand!

That would be embarrassing if a next-gen part couldn't beat a re-touched 480. That would be like the 5870 not beating GT200b. 😉
Indeed. Kind of like a 6870 not beating a 5870.
Wait a second....

Honestly, regarding "GF110 is actually GF100B"...does anybody really care what they call it?
 
That would be interesting....they would catch a lot of flak for that.

Who cares if there is flak? After the reviews I have seen in the last six weeks, you either play by AMD's rules or they make you buy retail. Which is how it should be anyway.
Many of us see clearly the events of the last six weeks. WHO CARES ABOUT FLAK?
 
Indeed. Kind of like a 6870 not beating a 5870.
Wait a second....

Honestly, regarding "GF110 is actually GF100B"...does anybody really care what they call it?

What part about the 6870 being the mid-range value GPU don't you understand? This generation's mid-range is close to last generation's top card in single-card performance. If you CrossFire them, the 6870s win against the 5870. You're picking your battle lines. Get over it: in pure high-end performance, 6870s CrossFired beat the 5870. So you're implying the AMD high-end cards, the 6950/6970, are cheese puffs, because there is no way around it.

So the 6950, according to your reasoning, will be what, 20-25% better than the 6870? LOL. Then the 6970 will be what, 15% faster than the 6950, is this correct? Well, you just keep that thought, right up until 28nm GPUs are released. But someday, maybe just maybe, reality will set in.
 
Also with lower temps, less power draw, quieter operation, and faster clock for clock performance. But other than that, you're spot on!

Ya blew right by my pea-sized brain there, fella. So you got a link to where the 580 disabled cores to equal the 480, so as to get a clock-for-clock comparison at the same GPU frequency?
 
Also with lower temps, less power draw, quieter operation, and faster clock for clock performance. But other than that, you're spot on!

Let's see, starting with power draw. Less under Furmark/OCCT because of throttling and at idle and 2D. It draws more during gaming. The difference is minor though and it's definitely more efficient. There's more than just the GPU drawing power though. I've seen HD5970 running @ 850MHz w/4Gig RAM draw less than reference designs because of binning and other components on the card.

Lower temps = better cooler

Faster clock for clock = more shaders.

None of this though proves that it's the same chip, just tweaked, or a new design.
 
Let's see, starting with power draw. Less under Furmark/OCCT because of throttling and at idle and 2D. It draws more during gaming. The difference is minor though and it's definitely more efficient. There's more than just the GPU drawing power though. I've seen HD5970 running @ 850MHz w/4Gig RAM draw less than reference designs because of binning and other components on the card.

Lower temps = better cooler

Faster clock for clock = more shaders.

None of this though proves that it's the same chip, just tweaked, or a new design.

Yeah.

gtx580_power.png
 
Oh no... the die-size argument... sorry, not going there. Although you know it's bad when that is what you grasp onto.

Then again, we have no idea what the 6970 is going to bench at, so this is all just speculation anyway.

The "die size argument"? And what am I grasping to?

Let me try to get this right:
You make unrealistically exaggerated claims about when people should be disappointed, and argue that looking at the reality of the technology, with real-world numbers and limitations, is grasping at straws?

I don't really follow where you're going 🙂


I'm talking real-world technology, not expectations of magic because it has a new name. This goes for both NVIDIA and AMD. Semiconductor technology development is shrink-driven; it's called Moore's law. When has an architecture alone ever given an improvement of more than 10-15% between generations? Never.


Does your reasoning apply to the 6870 and 6850 as well? Are they major disappointments because they did not dethrone the GTX 580/GTX 480? If perf/mm² is grasping at straws, then I guess all the related parameters that depend on it are also grasping at straws: price, perf/W, choice of market segment, etc.?

Finally I understand where you're coming from. It is a new product. It has a new name. Regardless of everything else, this means it has to be the fastest chip on the market; anything less is a disappointment. Correct me if I'm wrong.

I'll keep to my version of reality: if Cayman shows up as a 400mm² chip and runs close to a 530mm² chip, I will be quite impressed and surprised. I'd also expect that achievement to show up in quite competitive prices.

I'll tell you right now that if this is true, the market will not agree with you; it will treat the HD 6970 as quite the success. So you'll be very alone in your disappointment.
 
Also with lower temps, less power draw, quieter operation, and faster clock for clock performance. But other than that, you're spot on!


That's due to improved cooling and the implementation of power regulation. Whether the chip itself is helping out in any way, we don't know.
 
I think I'd need Idontcare's expertise to back me up on this, but a B-layer spin implies only changes to the metal layer. GF110 has a different transistor count, and as Anand already showed in clock-for-clock comparisons, GF110 does better than GF100's 512 core theoretical performance.
The GTX285 was a die-shrink down to 55nm, and that was referred to as the GT200b.
 
By the same token, if a 512SP Fermi is what AMD expected for the original Fermi release, what the hell were they thinking with the 5870?

That it would be a good mid-high end part that is much cheaper to make than a 512SP Fermi part, and would be priced accordingly. The 5970 would take on Nvidia's fastest.
 
It's an internal code name.
It could be ElephantMeister555 and it would still end up giving us the same GPU.

I'm not sure NV ever advertised it as being more than a refined GF100. They just decided to name it GF110 instead of GF100b.
 
Right, the name mentioned could be what the developer thought the GPU type would be. It's all a translation from the hex/binary values put in place by the person who wrote that code for nvflash.
 
Ya blew right by my pea-sized brain there, fella. So you got a link to where the 580 disabled cores to equal the 480, so as to get a clock-for-clock comparison at the same GPU frequency?

Faster clock for clock = more shaders.

Anandtech's normalized scores of the GTX 580 vs. GTX 480 showed instances where, with the GTX 580 clocked at 700MHz, it performed better than the theoretical improvement from adding more shaders. Nemesis, I hope you can, as you put it, wrap your "pea-sized brain" around that one, fella. My point stands: the GTX 580 performs better clock for clock, AND shader for shader, than the GTX 480.

33984.png
 
I think I'd need Idontcare's expertise to back me up on this, but a B-layer spin implies only changes to the metal layer. GF110 has a different transistor count, and as Anand already showed in clock-for-clock comparisons, GF110 does better than GF100's 512 core theoretical performance.

This is something entirely different. You are thinking of stepping revisions (masks) whereas the topic is in regards to a codename itself.

But in regards to stepping revisions, the typical delineation is that changes in the BEOL (metal layers) that don't require new masks for the stuff in the FEOL (xtors) get dubbed by merely incrementing the number that follows the stepping letter (A1 -> A2), whereas changes to the xtor masks are denoted by incrementing the letter itself (A1 -> B3).

There is no "law" regarding stepping enumeration, though. Everyone just conforms for the sake of convenience; there is no penalty incurred for violating the nomenclature.
 
Well, in that case the 6970 had better beat the crap out of it if it is only a 480 re-brand!

That would be embarrassing if a next-gen part couldn't beat a re-touched 480. That would be like the 5870 not beating GT200b. 😉


I'm sure the 6970 will feel quite embarrassed :whistle:
 
Anandtech's normalized scores of the GTX 580 vs. GTX 480 showed instances where, with the GTX 580 clocked at 700MHz, it performed better than the theoretical improvement from adding more shaders. Nemesis, I hope you can, as you put it, wrap your "pea-sized brain" around that one, fella. My point stands: the GTX 580 performs better clock for clock, AND shader for shader, than the GTX 480.

33984.png

So Anand normalized the results and used the theoretical improvements of more shaders.

Well, after your reply here, I took the pea-sized brain I have and formulated that you, sir, are the proud owner of a mustard-seed-sized brain. Anand proved nothing.
The same as him not knowing if the SB preview he did was with a 6 or 12 EU part. If Anand was all that, he would be engineering this stuff rather than writing articles about hardware.

All Anand did was guess, and it was a bad guess. What's the difference between a 470 and a 480?

It's like the way you guys determined the performance improvement between the 480 and the 580.
Anand did a Futuremark run but didn't show the results of the performance difference, just the efficiency improvement. Had he given the score, the performance difference would not be all that: 9%. If you throw out the high and low scores, the performance difference shrinks too; and since the Futuremark score was never given, you can't throw that one out, as it was never included in the amazing math I've seen performed here on this forum.
Most everyone here sees the way the NV fanboys work. I can't wait till you see the 6970 scores and all the tail wagging the dog that will follow those benchmarks.

Post 44 above, second graph: check out the difference between the 470 and 480, then take time to check out the performance difference between the 470/480.



Continuing to accumulate points and infractions is now going to have some teeth.
Continuing to insult members has cost you a week off.


esquared
Anandtech Forum Director
 
Last edited by a moderator:
So Anand normalized the results and used the theoretical improvements of more shaders.

Well, after your reply here, I took the pea-sized brain I have and formulated that you, sir, are the proud owner of a mustard-seed-sized brain. Anand proved nothing.
The same as him not knowing if the SB preview he did was with a 6 or 12 EU part. If Anand was all that, he would be engineering this stuff rather than writing articles about hardware.

All Anand did was guess, and it was a bad guess. What's the difference between a 470 and a 480?

Clock speed going from 470 to 480 - 15.3%
Shader increase going from 470 to 480 - 9%
Memory Bandwidth increase going from 470 to 480 - 32%

Based on the clock speed and shader count increases alone, which would give a theoretical increase of 26% in performance, and not even factoring in the memory bandwidth increase, I assure you that the GTX 480 does not outperform its theoretical increase over the GTX 470. I do not appreciate the insult, but even my "mustard seed" brain is proving you wrong again and again. You really should consider typing out cohesive, rational messages before diving into insults and engaging in arguments you lose from the start.
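As a rough sanity check, the way those two factors compound can be sketched like this (the percentages are the ones quoted above; treating them as independent linear multipliers is an idealized upper bound, since real games are often bandwidth- or ROP-limited):

```python
# Theoretical GTX 470 -> GTX 480 scaling, using the figures quoted above:
# +15.3% core clock and +9% shader count. Assuming performance scales
# linearly and independently with each (an idealization), the combined
# upper bound is their product.
clock_gain = 1.153   # relative core clock increase
shader_gain = 1.09   # relative shader count increase

combined = clock_gain * shader_gain
print(f"theoretical combined gain: {(combined - 1) * 100:.1f}%")
# prints: theoretical combined gain: 25.7%
```

That product is where the "26%" ballpark comes from, before memory bandwidth is even considered.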
 
Anandtech's normalized scores of the GTX 580 vs. GTX 480 showed instances where, with the GTX 580 clocked at 700MHz, it performed better than the theoretical improvement from adding more shaders. Nemesis, I hope you can, as you put it, wrap your "pea-sized brain" around that one, fella. My point stands: the GTX 580 performs better clock for clock, AND shader for shader, than the GTX 480.

33984.png

'While we were doing our SLI benchmarking we got several requests for GTX 580 results with normalized clockspeeds in order to better separate what performance improvements were due to NVIDIA’s architectural changes and enabling the 16th SM, and what changes are due to the 10% higher clocks. So we’ve quickly run a GTX 580 at 2560 with GTX 480 clockspeeds (700Mhz core, 924Mhz memory) in order to capture this data. Games that benefit most from the clockspeed bump are going to be memory bandwidth or ROP limited, while games showing the biggest improvements in spite of the normalized clockspeeds are games that are shader/texture limited or benefit from the texture and/or Z-cull improvements.'

What you get from the review is that the charts show which improvements come from the additional shaders and which come from the clock-speed increase.

Clock for clock, a 580 is 6% faster than a GTX 480, and that 6% is coming from the additional shaders. If you stripped a 580 of the additional SM and then ran it at the same clock as a GTX 480, they would perform the same. At best you might see a 1% improvement, maybe.

Considering clock for clock the 580 is 6% faster than a 480, it's safe to say that improvement is from the additional shaders.

My overclocked 480s are faster than stock 580s. There is not much of anything new of merit in a 580 beyond the improved thermals and reduced noise; other than that, you've got a GTX 480 with 512 SPs and an overclock.
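For what it's worth, the ~6% clock-for-clock figure is consistent with the shader-count ratio alone; a quick sketch using the counts from the posts above:

```python
# If clock-for-clock performance scaled purely with enabled shader count,
# going from the GTX 480's 480 shaders to the GTX 580's 512 would give:
shaders_gtx480 = 480
shaders_gtx580 = 512

ideal_gain = shaders_gtx580 / shaders_gtx480 - 1
print(f"ideal shader-count gain: {ideal_gain * 100:.2f}%")
# prints: ideal shader-count gain: 6.67%
```

That ceiling is close enough to the measured ~6% that attributing the clock-for-clock delta to the extra SM, rather than to architectural changes, seems reasonable.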
 