Which GPU do you think has aged the worst in the last 3 years?

GCN's biggest weakness is the shader compilation procedure in D3D. The D3D bytecode is a really outdated IR, and Microsoft's compiler optimizations are harmful for modern GCN-like designs. AMD needs to deoptimize the code to ensure better compilation to the hardware. On Xbox One the same shader is nearly 20-25% faster compared to PC. With a more robust IR (perhaps SPIR-V?), GCN could easily gain 10-15% performance in general.

Is that still true in DX12 and Vulkan (SPIR-V) or only DX11?

Edit: If I understood correctly, GCN is currently gimped because the D3D compiler is sub-optimal for it (or rather, it was designed sub-optimally for DX11). Since the Xbox One is a lot faster at the same task on a DX12-like API, DX12 should give GCN an uplift on top of everything else.
 

Face2Face

Diamond Member
Jun 6, 2001
I'm still rocking big Kepler. The recent driver updates for games like TW3 and GTA V have made some huge overall improvements. I didn't spend $650+, and my card overclocks well, so I guess I'm not feeling the Kepler burn as badly as others.
 

railven

Diamond Member
Mar 25, 2010
Let's say it is true...

... then why didn't it happen in this example, where AMD tackled NV's high end (GK110) with their mid-range (Hawaii)?

Some designs have more bottlenecks than others. Some have features that are yet to be used.

Mid-range? Hawaii was their top GPU for almost two years.
 

zlatan

Senior member
Mar 15, 2011
Is that still true in DX12 and Vulkan (SPIR-V) or only DX11?
It's true for DX12, but not for Vulkan. SPIR-V is a much better IR.

Edit: If I understood correctly, GCN is currently gimped because the D3D compiler is sub-optimal for it (or rather, it was designed sub-optimally for DX11). Since the Xbox One is a lot faster at the same task on a DX12-like API, DX12 should give GCN an uplift on top of everything else.

Microsoft just uses too-aggressive optimization in their D3D bytecode compiler, and this is not good for GCN. The Xbox One uses different compilers, which are designed for GCN.

And this is not an AMD-only problem. Even Nvidia takes some penalty, just not as much.
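
To make that "deoptimize" step concrete, here is a toy sketch (hypothetical names, for illustration only, not AMD's actual driver code): fxc emits DXBC pre-scheduled for a vec4-style ALU, and a scalar design like GCN first has to split each 4-wide op back into per-component instructions before its own backend can schedule them well.

    # Toy model of the "deoptimize" pass (hypothetical, for illustration only).
    # result = a * b + c, all float4 -> one vec4 multiply-add in the bytecode:
    dxbc = [("mad", "r0.xyzw", "a.xyzw", "b.xyzw", "c.xyzw")]

    def deoptimize(vec_ops):
        """Split vec4 ops into per-component scalar ops (GCN's native form)."""
        scalar_ops = []
        for op, dst, *srcs in vec_ops:
            reg, mask = dst.split(".")
            for ch in mask:  # one scalar instruction per enabled component
                scalar_ops.append((f"v_{op}_f32", f"{reg}.{ch}",
                                   *(s.split(".")[0] + "." + ch for s in srcs)))
        return scalar_ops

    for ins in deoptimize(dxbc):
        print(ins)
    # ('v_mad_f32', 'r0.x', 'a.x', 'b.x', 'c.x') ... and likewise for y, z, w

A richer IR such as SPIR-V would hand the driver code much closer to this scalar form to begin with, which is presumably the uplift zlatan is describing.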
 

dogen1

Senior member
Oct 14, 2014
GCN's biggest weakness is the shader compilation procedure in D3D. The D3D bytecode is a really outdated IR, and Microsoft's compiler optimizations are harmful for modern GCN-like designs. AMD needs to deoptimize the code to ensure better compilation to the hardware. On Xbox One the same shader is nearly 20-25% faster compared to PC. With a more robust IR (perhaps SPIR-V?), GCN could easily gain 10-15% performance in general.

This has me thinking a bit. How much extra work would this "deoptimization" entail for the driver? Any idea how much it contributes to the extra overhead AMD's drivers seem to have vs. Nvidia's?
 

Piroko

Senior member
Jan 10, 2013
So you guys care more about your GPU's performance at the end of its life cycle (when you're turning down settings anyway and looking to upgrade) than at the beginning.
I think it's specifically that timeframe that is the most interesting one. Or, well, interesting to some at least.

Look at it from this perspective: two months after you bought the new card, you're likely still playing the games you knew were performing at x fps thanks to benches on the net. There's little reason to do anything but look out for confirmation bias, and if you can't find any, "Meh, I don't care."
But two years later you happen to buy that new action-RPG and want to play the snot out of it. So you start it up, disable the settings you know will hammer performance on any card, put everything else to medium-high, and off you go. Four hours down the line you enter the first major city and...

- you see fps in the low 40s with occasional stutters, don't want to turn settings down now that you've grown accustomed to those visuals, and impulse-buy a new card in August, or

- you see fps in the high 40s with no stutters and finish the game on this card, aiming for a good deal on Black Friday instead - which has a fair chance of getting you much higher performance for the same money you would have spent in August.

Just a Gedankenspiel (a thought experiment) with absolutely no basis in real events, I assure you.


On a side note, I just looked at the Steam stats.
It took me a moment to realize that the HD 7900 series still has 1.84%, losing only 0.11% from the May results, while the GTX 670 and GTX 680 have a combined 0.64% + 0.43% = 1.07%, dropping 0.18% since May. Not that those stats are particularly good, but it's another data point that seems to support this thread.
 

tential

Diamond Member
May 13, 2008
I think it's specifically that timeframe that is the most interesting one. Or, well, interesting to some at least.

Look at it from this perspective: two months after you bought the new card, you're likely still playing the games you knew were performing at x fps thanks to benches on the net. There's little reason to do anything but look out for confirmation bias, and if you can't find any, "Meh, I don't care."
But two years later you happen to buy that new action-RPG and want to play the snot out of it. So you start it up, disable the settings you know will hammer performance on any card, put everything else to medium-high, and off you go. Four hours down the line you enter the first major city and...

- you see fps in the low 40s with occasional stutters, don't want to turn settings down now that you've grown accustomed to those visuals, and impulse-buy a new card in August, or

- you see fps in the high 40s with no stutters and finish the game on this card, aiming for a good deal on Black Friday instead - which has a fair chance of getting you much higher performance for the same money you would have spent in August.

Just a Gedankenspiel (a thought experiment) with absolutely no basis in real events, I assure you.


On a side note, I just looked at the Steam stats.
It took me a moment to realize that the HD 7900 series still has 1.84%, losing only 0.11% from the May results, while the GTX 670 and GTX 680 have a combined 0.64% + 0.43% = 1.07%, dropping 0.18% since May. Not that those stats are particularly good, but it's another data point that seems to support this thread.
I've seen nothing like what you've presented in this thread, though. If that's the case, sure.

I'm not seeing cases showing unplayable Kepler vs. playable Hawaii, and I don't see the numerous cases where Hawaii was unplayable at game launches being brought up either. Or any of Hawaii's other pitfalls, which are numerous.

I get what you're saying, though; I just don't see anything of the sort being presented.
 

crisium

Platinum Member
Aug 19, 2001
I've seen nothing like what you've presented in this thread, though. If that's the case, sure.

I'm not seeing cases showing unplayable Kepler vs. playable Hawaii, and I don't see the numerous cases where Hawaii was unplayable at game launches being brought up either. Or any of Hawaii's other pitfalls, which are numerous.

I get what you're saying, though; I just don't see anything of the sort being presented.

Kepler's aging is not always enough to demand lowering settings and/or accepting lower FPS compared to GCN, but it can be. You did see cases of the bolded above, two pages ago.

http://www.techspot.com/articles-info/921/bench/1680.png
http://www.techspot.com/articles-info/956/bench/1920.png

290 vs. 780. That could be the difference between a smooth 60 fps with V-Sync and not. That's the risk of gradual stagnation. A year and a half ago, both cards might have just squeezed out 60 fps for V-Sync in game X. Now, in 2015, in game Y, one card might just make the cut while the other forces you to give up V-Sync or lower your settings. It's not universal and not always drastic, but these situations will arise with once-comparable products when one starts to decline.
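
For reference, the frame-time arithmetic behind that V-Sync cut-off:

    # At 60 Hz V-Sync, every frame must be ready within 1000/60 ms. A card
    # averaging in the high 40s is one settings notch from making the cut;
    # one in the low 40s is not.
    BUDGET_MS = 1000 / 60  # ~16.7 ms per frame
    for fps in (42, 48, 60):
        frame_ms = 1000 / fps
        print(f"{fps} fps = {frame_ms:.1f} ms/frame "
              f"({frame_ms - BUDGET_MS:+.1f} ms vs the 60 Hz budget)")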
 

Keysplayr

Elite Member
Jan 16, 2003
Let's say it is true...

... then why didn't it happen in this example, where AMD tackled NV's high end (GK110) with their mid-range (Hawaii)?

Some designs have more bottlenecks than others. Some have features that are yet to be used.

This does not compute, Erenhardt. You're saying that Hawaii was a mid-range product from AMD? Please elaborate.
 

Keysplayr

Elite Member
Jan 16, 2003
So was the GK110. Same situation.
Hawaii was 25% smaller than GK110.
GK106 was almost 25% smaller than Tahiti.

Oh, I see. That alternate-universe logic. OK then.
Suddenly die size is what designates mid-range or top-end across competitors.

To this day, you guys simply cannot handle GK104 being a mid-range product.
 

railven

Diamond Member
Mar 25, 2010
Oh, I see. That alternate-universe logic. OK then.
Suddenly die size is what designates mid-range or top-end across competitors.

To this day, you guys simply cannot handle GK104 being a mid-range product.

I guess, if we look at transistors, GM200 is NV's mid-range GPU.

GK110 has 16% more than Hawaii XT.
Fiji XT has 11% more than GM200.

:p
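
A quick sanity check on those ratios, using commonly cited transistor counts (the exact figures vary a little by source, so treat them as assumptions):

    # Reported transistor counts in billions (approximate; sources disagree
    # slightly, on Hawaii in particular).
    counts = {"GK110": 7.1, "Hawaii XT": 6.2, "GM200": 8.0, "Fiji XT": 8.9}

    def pct_more(a, b):
        """How many percent more transistors chip a has than chip b."""
        return (counts[a] / counts[b] - 1) * 100

    print(f"GK110 vs Hawaii XT: +{pct_more('GK110', 'Hawaii XT'):.0f}%")  # ~ +15%
    print(f"Fiji XT vs GM200:   +{pct_more('Fiji XT', 'GM200'):.0f}%")    # ~ +11%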
 

iiiankiii

Senior member
Apr 4, 2008
Oh, I see. That alternate-universe logic. OK then.
Suddenly die size is what designates mid-range or top-end across competitors.

To this day, you guys simply cannot handle GK104 being a mid-range product.

Agreed that GK104 is a mid-range product. But it was MARKETED as a top-tier card. The GTX 680 was positioned as the top-tier card, and as such it was priced as a high-end card.

Come to think of it, GTX 680 owners should feel the most screwed for paying high-end pricing for a mid-range card.
 

railven

Diamond Member
Mar 25, 2010
Agreed that GK104 is a mid-range product. But it was MARKETED as a top-tier card. The GTX 680 was positioned as the top-tier card, and as such it was priced as a high-end card.

Come to think of it, GTX 680 owners should feel the most screwed for paying high-end pricing for a mid-range card.

How do you think an AMD owner who paid MORE for an HD 7970 would feel when it lost to that overpriced "mid-range" card?
 

Keysplayr

Elite Member
Jan 16, 2003
I guess, if we look at transistors, GM200 is NV's mid-range GPU.

GK110 has 16% more than Hawaii XT.
Fiji XT has 11% more than GM200.

:p

Things can appear any way one wishes if one tries hard enough. It's all in how you choose to look at them.
 

iiiankiii

Senior member
Apr 4, 2008
:biggrin: Haha, honestly wasn't expecting that answer. Appreciate the honesty.

I love it. My hat is off to you iiiankiii. I expected the worst but was pleasantly surprised. :thumbsup:

It's true. As always, it's easy to feel screwed when something newer and better comes along for less money. The GTX 680 was very well positioned, at the expense of us customers. Nvidia played their hand very well in that respect. From a performance standpoint, the GTX 680 and the 7970 were competitive enough that 7970 owners felt it wasn't necessary to upgrade.

Of course, in hindsight, the 7970 isn't too bad. As the thread suggests, the extra RAM and the consoles being GCN-based are giving the 7970 some resilience compared to the GTX 680. DX12 will only strengthen GCN-based cards further.

Given the choice between the 7970 and the GTX 680, I think the 7970 would be the better card to have right now, which isn't to say the GTX 680 isn't competitive. The trend seems to point to the 7970 holding up a bit better than the GTX 680.
 

Headfoot

Diamond Member
Feb 28, 2008
The 7970 sings even today with the typical 1100-1200 MHz overclock you can get on it and its 3 GB of VRAM. It's pretty clear the 7970 has turned out to be the better card than the 680, and that's before any DX12 gains kick in.
 

Paratus

Lifer
Jun 4, 2004
The 7970 sings even today with the typical 1100-1200 MHz overclock you can get on it and its 3 GB of VRAM. It's pretty clear the 7970 has turned out to be the better card than the 680, and that's before any DX12 gains kick in.

I'm still rocking the 7970 GHz too, and it plays anything I own at 19x12 (1920x1200) with everything turned up.

Actually, I've found good longevity with every AMD/ATI card I've owned. Each time I've been in the market, NV hasn't had a compelling card for me:

  • 2004: 9600 XT (competition: NV FX 5700; the FX vs. 9xxx series, enough said)
  • Late 2006: X1950 Pro 512 MB AGP (competition: the crippled 16-pipe 7800 GS; NV had dropped the AGP market)
  • 2010: 5870 (competition: GTX 280/285; new DX11 vs. DX10)
  • Late 2012: 7970 GHz (competition: GTX 680; this one was close, but the 7970 was a gift)

I've had a lot of luck with how long my AMD cards have lasted: at least 2.5 years each. Good for me, but not great for AMD's bottom line.
 

dogen1

Senior member
Oct 14, 2014
Is that still true in DX12 and Vulkan (SPIR-V) or only DX11?

Edit: If I understood correctly, GCN is currently gimped because the D3D compiler is sub-optimal for it (or rather, it was designed sub-optimally for DX11). Since the Xbox One is a lot faster at the same task on a DX12-like API, DX12 should give GCN an uplift on top of everything else.

If I remember right, the D3D bytecode is suboptimal because it was designed for vector architectures, whereas modern GPUs are scalar. I don't think DX12 changes any of this, but I don't know for sure.
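
A minimal illustration of that vector-vs-scalar distinction, as a conceptual sketch rather than any vendor's actual ISA:

    # "Vector" model: one instruction operates on a whole float4 at once, so
    # the bytecode bakes in 4-wide ops and horizontal reductions.
    def dot_vec4(a, b):
        prod = [x * y for x, y in zip(a, b)]  # one SIMD multiply
        return sum(prod)                      # horizontal add across the vector

    # "Scalar" model (GCN-like): the same work as independent per-component
    # fused multiply-adds, which a compiler can schedule far more freely;
    # hence bytecode pre-optimized for the vec4 model is an awkward start.
    def dot_scalar(a, b):
        acc = 0.0
        for x, y in zip(a, b):
            acc = x * y + acc                 # one scalar FMA at a time
        return acc

    a, b = (1.0, 2.0, 3.0, 4.0), (4.0, 3.0, 2.0, 1.0)
    assert dot_vec4(a, b) == dot_scalar(a, b) == 20.0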