7950 vs GK-104

Feb 19, 2009
In a perfect world, it would be a direct die shrink of the 580, with some additional tweaks and a clock speed bump. TDP would be nearly halved, to ~150W (no, the 580 isn't 250W, that's NV's marketing spec and driver throttling). Such a GPU would deliver ~580 performance in the midrange.
 

rgallant

Golden Member
Apr 14, 2007
The GTX 460 didn't smoke the GTX285 either.

[Relative performance charts: perfrel_1920.gif (1920×1200) and perfrel_2560.gif (2560×1600)]


I think it was BFG10K who had a review where he was annoyed that the GTX 460 was actually slower at exotic AA levels.


So I wouldn't be shocked if the GK104 was at GTX 580 performance levels.

But all this depends on the actual architecture and the state of the 28nm process.



-yeah, but you're forgetting people saved $50.00 buying a card that couldn't play games at the time or today, and have now upgraded; not sure if saving $50 was equal to paying $300.00+ to replace it.
-total outlay $500+; bang for the buck to play at low settings, then and now lol
 

GaiaHunter

Diamond Member
Jul 13, 2008
-yeah, but you're forgetting people saved $50.00 buying a card that couldn't play games at the time or today, and have now upgraded; not sure if saving $50 was equal to paying $300.00+ to replace it.
-total outlay $500+; bang for the buck to play at low settings, then and now lol

I'm sorry, but I don't seem to understand your post.

Is it something about midrange cards not being worth it?

Depends on when you buy the cards - at launch, high-end cards generally carry a high premium.

And of course if you bought a GTX 285 you would upgrade to a $500 GTX 480, and likewise a GTX 460 was a worthy upgrade from a card in the 4850 range or lower (and debatable over a 4870/GTX 260).
 

Arzachel

Senior member
Apr 7, 2011
right, but you're not accounting for a GTX 760 being produced on a process node a full level lower, i.e. 40nm -> 28nm. With specs that should put it around ~10% faster than a 580, there's no way it'll come close to a 250W TDP unless they give it some absolute monster clock rates and/or actually spec it to be that much better than the 580, either of which would mean the 760 ends up that much faster than the 580.

Which wouldn't really make sense, as it would leave very little room for the 780 to stretch its legs, unless nVidia pulls a stunt where the 780 is an absolute monster chip given very conservative clock rates just to meet TDP, leaving it up to consumers to clock it beyond spec for mind-blowing performance while shattering traditional TDP levels... not unlike what we've seen with dual-GPU card solutions in the past.

That's my point: a GTX 760 more powerful than a GTX 580 would leave no room for a GTX 780 part. And I don't believe that Nvidia would break the PCI-E spec; a battle of efficiency is a battle they've already lost.
 

Gikaseixas

Platinum Member
Jul 1, 2004
I bet AMD is working on the 7950 to be at least a few % faster than the GTX 580. The "unofficial" GTX 760 could very well perform between the two top AMD cards. The GTX 780 will probably be faster than the 7970 and prompt AMD to release a 7980 to counter it... and the game goes on
 

bunnyfubbles

Lifer
Sep 3, 2001
That's my point: a GTX 760 more powerful than a GTX 580 would leave no room for a GTX 780 part. And I don't believe that Nvidia would break the PCI-E spec; a battle of efficiency is a battle they've already lost.

the GTX 460 turned out to be as fast, if not ~10% faster, than the 285, all while consuming ~70-80W less under load

why couldn't the 760 be just as fast, if not a tad faster, than the 580?

I really think you're completely failing to account for an entire process node shrink; this isn't just a half node moving from 40nm -> 32nm, we're going down to 28nm from 40nm...

the 780 will have plenty of room to distance itself, and just like AMD, nVidia can stick with extremely conservative clock rates to preserve TDP yet still appeal to enthusiasts with extreme overclocking potential

also, it remains to be seen as far as efficiency goes. AMD has won the efficiency battle several generations in a row; however, their move to GCN was meant to vastly improve HPC performance, and it seems to have cost them some efficiency in both game performance and power draw, given that they're currently running above the power levels established by the 6900s in order to secure a substantial lead over the 580. On the flip side, nVidia made the move to focus heavily on HPC performance long ago, dating back to the 8800s with CUDA, and thus they don't really have to sacrifice anything with any design changes.
 

jmarti445

Senior member
Dec 16, 2003
Look at the history. Things are "usually" doubled. Or at the very least increased by 50%.

Start with 5800Ultra with 8 pipes. 6800Ultra with 16 pipes. 7800/7900 with 24 pipes.
Then on to CUDA arch:
8800Ultra 128 CUDA cores. GTX280 240 CUDA cores. GTX480 480 CUDA cores.
Die shrink scaling be damned. Know what I mean?
And as far as this drastic architectural change? We know nothing about it so I would (for now) put a pin in that for later when it's closer to launch and leaks start to manifest.

It's not really doubled, despite the core changes: the 480 ran its shaders at double the base clock, the 280 ran them at about 2.25x the base clock, and the 8(9)000 series ran them at about 2.5x the base clock. In addition, I think I remember that the 280 series had the most complex compute capabilities of any of the GeForce GPUs, and the 480 series was a compromise between the 8000 series and the 280 when it came to stuff like registers and complexity. It's one of the reasons ATI really needed until the 5000 series to beat the GeForce 200 series, and why Nvidia has had some trouble taking back the performance crown from ATI ever since.

I'm interested to see what Kepler is going to be like, because I've heard everything from the shader clock being up at 3GHz to it equaling the base core clock. I personally feel like they will add cores and just keep the shader speed at double the base clock.
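For anyone wanting to sanity-check those ratios, here's a minimal sketch in Python, using the multipliers claimed above and an assumed 700MHz base clock (illustration only, not official specs):

```python
# Shader ("hot") clock multipliers as claimed in the post above --
# these are the poster's figures, not verified specs.
multipliers = {"8(9)000 series": 2.5, "GTX 280": 2.25, "GTX 480": 2.0}

def shader_clock(base_mhz, mult):
    """Effective shader clock = base clock * multiplier."""
    return base_mhz * mult

# Assumed 700 MHz base clock, purely for illustration:
for card, m in multipliers.items():
    print(f"{card}: {shader_clock(700, m):.0f} MHz shader clock")
```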
 

Arzachel

Senior member
Apr 7, 2011
the GTX 460 turned out to be as fast, if not ~10% faster, than the 285, all while consuming ~70-80W less under load

why couldn't the 760 be just as fast, if not a tad faster, than the 580?

I really think you're completely failing to account for an entire process node shrink; this isn't just a half node moving from 40nm -> 32nm, we're going down to 28nm from 40nm...

the 780 will have plenty of room to distance itself, and just like AMD, nVidia can stick with extremely conservative clock rates to preserve TDP yet still appeal to enthusiasts with extreme overclocking potential

also, it remains to be seen as far as efficiency goes. AMD has won the efficiency battle several generations in a row; however, their move to GCN was meant to vastly improve HPC performance, and it seems to have cost them some efficiency in both game performance and power draw, given that they're currently running above the power levels established by the 6900s in order to secure a substantial lead over the 580. On the flip side, nVidia made the move to focus heavily on HPC performance long ago, dating back to the 8800s with CUDA, and thus they don't really have to sacrifice anything with any design changes.

I'm not ignoring the shrink. Hell, Nvidia could release a GTX 760 that's 50% faster than a GTX 580, but they have 3 tiers above the GTX 760, and each has to have enough difference in performance to not cannibalize each other. Doing that and not hitting the 300W wall is the issue here and why I think the GTX 760 or whatever Nvidia ends up naming their midrange offering will perform slightly below a GTX 580.

And now that I think about it, they could maybe drop the GTX 465/560 Ti tier and change the GTX 760 from a midrange part into a sweet-spot one. That being 10% faster than a GTX 580 would be reasonable. Still, people expecting an 80% jump for the top end without breaking the PCI-E limit will be disappointed.

As for GCN, the HD 7970 actually performs somewhat close to what a 2048 SP VLIW4 card on 28nm would, and it will be interesting to compare the HD 7870 with the HD 6970, but currently it seems the cost of the increased HPC capabilities has been quite minor.
 

Ajay

Lifer
Jan 8, 2001
I'm not ignoring the shrink. Hell, Nvidia could release a GTX 760 that's 50% faster than a GTX 580, but they have 3 tiers above the GTX 760, and each has to have enough difference in performance to not cannibalize each other. Doing that and not hitting the 300W wall is the issue here and why I think the GTX 760 or whatever Nvidia ends up naming their midrange offering will perform slightly below a GTX 580.

I seem to recall NV wanting the PCI-SIG to boost the max power draw per slot to 400W. This didn't happen in v3.0, of course. Nonetheless, I believe max power draw on the 580 is above 300W, IIRC. So there must be some latitude for fiddling around to 'reach' 300W. And I think we all recall the debacle with the 480, where NV was pressuring partners to put a max power of 300W on the retail boxes for the 480 (even when it wasn't true).

JHH has made a point of saying that NV's engineers have spent a lot of time and effort improving efficiency on Kepler. We will have to wait and see how well they have done.
 

sontin

Diamond Member
Sep 12, 2011
I'm not ignoring the shrink. Hell, Nvidia could release a GTX 760 that's 50% faster than a GTX 580, but they have 3 tiers above the GTX 760, and each has to have enough difference in performance to not cannibalize each other. Doing that and not hitting the 300W wall is the issue here and why I think the GTX 760 or whatever Nvidia ends up naming their midrange offering will perform slightly below a GTX 580.

Is there a reason you are ignoring GF114? The GTX 560 Ti has 66-75% of the performance of the GTX 580 while using around 63% of the power. And the GTX 560 Ti is not "slightly below a GTX 285". :)
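Working those percentages through (a rough sketch; the figures are from the post above, not measurements):

```python
# Perf/W implied by the figures above: GTX 560 Ti at 66-75% of GTX 580
# performance while drawing ~63% of the power (poster's numbers).
def relative_perf_per_watt(perf_fraction, power_fraction):
    """Perf/W of one card relative to another (1.0 = equal efficiency)."""
    return perf_fraction / power_fraction

low = relative_perf_per_watt(0.66, 0.63)   # low end of the claimed range
high = relative_perf_per_watt(0.75, 0.63)  # high end of the claimed range
print(f"GTX 560 Ti perf/W vs GTX 580: {low:.2f}x to {high:.2f}x")
```

So on those numbers the cut-down GF114 part comes out roughly 5-19% more efficient than the full GF110.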
 

Riek

Senior member
Dec 16, 2008
I bet the 760 will be faster than the 580... but then again, the 760 isn't the Kepler generation but a possible name tag for the generation after it... (Maxwell?), so that should be expected at that point. Why we are discussing that generation atm is unclear to me.

As far as I know, the next generation will be the 6xx series, not 7xx (given they already started with renames for that generation in the mobile segment).
 

Arzachel

Senior member
Apr 7, 2011
I seem to recall NV wanting the PCI-SIG to boost the max power draw per slot to 400W. This didn't happen in v3.0, of course. Nonetheless, I believe max power draw on the 580 is above 300W, IIRC. So there must be some latitude for fiddling around to 'reach' 300W. And I think we all recall the debacle with the 480, where NV was pressuring partners to put a max power of 300W on the retail boxes for the 480 (even when it wasn't true).

JHH has made a point of saying that NV's engineers have spent a lot of time and effort improving efficiency on Kepler. We will have to wait and see how well they have done.

Powertune.

Nvidia uses something similar, IIRC. And it's far from being as rosy as the write-up would suggest, because when the TDP outliers are the very games you buy a high-end GPU for in the first place, the dynamic TDP is misleading at best. Both Nvidia and AMD should cut it out, because it only makes things more difficult for the user, as the number of "Is this PSU enough???" threads shows.

Still, you can only skew the numbers so far before you have to admit that the TDP you state in the spec might be too optimistic.

sontin said:
Is there a reason you are ignoring GF114? The GTX 560 Ti has 66-75% of the performance of the GTX 580 while using around 63% of the power. And the GTX 560 Ti is not "slightly below a GTX 285". :)

GTX 560 Ti != GTX 560. As I said, if Nvidia has pulled the GTX 760 up to a tier previously occupied by the GTX 465 or the GTX 560 Ti, then 10% faster than a GTX 580 is reasonable.
 

bigi

Platinum Member
Aug 8, 2001
The chart you posted is false and wrong in so many ways it's not even funny. It seems like Nvidia is cooking something up for late March/early April, if you're willing to wait.

Moreover......... it has a bunch of sausages in the background.
 

Arkadrel

Diamond Member
Oct 19, 2010

Keysplayr

Elite Member
Jan 16, 2003
That's my point: a GTX 760 more powerful than a GTX 580 would leave no room for a GTX 780 part. And I don't believe that Nvidia would break the PCI-E spec; a battle of efficiency is a battle they've already lost.

I really can't imagine what on earth makes you say there would be no room for a GTX 780 part. Why? It's like you're ignoring, or keep forgetting, that there is a die shrink involved. That's the only reason I can think of for you saying this.

What reasoning is there to suppose that a GTX 760 will consume the same or more power than a current GTX 580, when the GTX 580 is on 40nm and the supposed GTX 760 would be on 28nm? You have totally lost me here.

Please elaborate in as great detail as you can, because I am really missing something here. And what you posted above to bunnyfubbles doesn't really make all that much sense.

Thanks, bud.
 

Arzachel

Senior member
Apr 7, 2011
I really can't imagine what on earth makes you say there would be no room for a GTX 780 part. Why? It's like you're ignoring, or keep forgetting, that there is a die shrink involved. That's the only reason I can think of for you saying this.

What reasoning is there to suppose that a GTX 760 will consume the same or more power than a current GTX 580, when the GTX 580 is on 40nm and the supposed GTX 760 would be on 28nm? You have totally lost me here.

Please elaborate in as great detail as you can, because I am really missing something here. And what you posted above to bunnyfubbles doesn't really make all that much sense.

Thanks, bud.

I have never said that a GTX 760 would consume as much power as a GTX 580, nor am I ignoring the die shrink.

Looking at the HD 7970, the GTX 780 going much past 40% faster than the GTX 580 seems unrealistic, since they don't have much TDP headroom. Now drop 15-20% each for the GTX 570 and GTX 560 Ti successors, and the GTX 760 works out to be slightly slower than the GTX 580. As I've said, this all falls apart if Nvidia changes up their lineup, but it should still hold for whatever they choose to name their midrange card.
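That tier arithmetic works out like this (a sketch using the assumed numbers from the post, not actual specs):

```python
# Hypothetical lineup: top part (GTX 780) at ~1.40x GTX 580 performance,
# stepping down 15-20% per tier through the 570- and 560 Ti-class slots.
def tier_performance(top_relative, drop, tiers_down):
    """Relative performance after stepping down `tiers_down` tiers of size `drop`."""
    return top_relative * (1 - drop) ** tiers_down

for drop in (0.15, 0.20):
    gtx760 = tier_performance(1.40, drop, 2)
    print(f"{drop:.0%} per tier -> GTX 760 at {gtx760:.2f}x a GTX 580")
# Two tiers down from 1.40x lands between ~0.90x and ~1.01x --
# i.e. right around GTX 580 level, matching the estimate above.
```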
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
That's TDP headroom on 40nm (the 580); 28nm drops power consumption considerably. Additionally, Nvidia is focusing on perf/W this time. Finally, the 680 might still use a bit more power than the 7970 (maybe close to the 580) and translate that into more performance. There's no way the top dog is only 40% faster than the 580.

I expect the 680 to land at 580 +60% and the 660 at 580 +10-20%.