Which GPU do you think has aged the worst in the last 3 years?

Feb 19, 2009
"Heavily optimized for Nvidia hardware." Translated to English, that means "crappy AMD drivers," or so some would lead you to believe.

According to the dev (a software engineer at the studio, posting on their official forum at the time), polling PhysX 600 times per second will do that.

The CPU gets hammered on AMD: the main thread becomes the bottleneck and GPU usage drops. NV has DX11 multi-threaded rendering, so it can use other threads when something is choking the main thread... like PhysX being polled at 10x the FPS (just because).
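A rough sketch of why that polling rate hammers one thread (illustrative only, not the game's actual code; the 600 Hz figure is the one quoted from the dev): a fixed 600 Hz physics tick against a ~60 fps render loop means ten physics substeps every frame, all serialized ahead of the draw calls.

```cpp
// Minimal sketch (not the game's actual loop): a fixed 600 Hz physics tick
// run against a 60 fps render loop. Every frame the main thread has to burn
// through ~10 physics substeps before it can issue any draw calls.
#include <cstdio>

int main() {
    const double physics_dt = 1.0 / 600.0;  // assumed 600 Hz PhysX tick
    const double frame_dt   = 1.0 / 60.0;   // ~60 fps render frame
    double accumulator = 0.0;

    for (int frame = 0; frame < 5; ++frame) {
        accumulator += frame_dt;
        int substeps = 0;
        while (accumulator >= physics_dt) {  // main thread pays for every substep
            accumulator -= physics_dt;       // stepPhysics(physics_dt) would go here
            ++substeps;
        }
        std::printf("frame %d: %d physics substeps before rendering\n",
                    frame, substeps);
    }
    return 0;
}
```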

You could say that because it's an NV-sponsored title, they can exploit their advantage in DX11 multi-threaded drivers, since AMD focused on Mantle/Vulkan/DX12 instead of putting resources into multi-threading support for DX11. So in that sense, AMD only have themselves to blame when games like this pop up.
 

crisium

Platinum Member
Aug 19, 2001
I love how this whole conversation is based on which card ended up being faster...

So you guys care more about your GPU performance at the end of its life cycle, rather than the beginning of its life cycle (when you're turning down settings anyway and looking to upgrade).

That's wild.

Let's not get into thread crapping. The entire point of this thread is to discuss how GPU performance ages in the years after launch. It's a good topic for those with 2-3+ year upgrade cycles.
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
I love how this whole conversation is based on which card ended up being faster...

So you guys care more about your GPU performance at the end of its life cycle, rather than the beginning of its life cycle (when you're turning down settings anyway and looking to upgrade).

That's wild.
Seeing that the purchasing strategy you and others recommend for video card upgrades is buying the second tier and selling your old card to offset the price, I can absolutely see performance after a year or two as being highly relevant.

What do you see as the resale price of a card that is seen as pretty useless relative to the competition?
 

tential

Diamond Member
May 13, 2008
Seeing that the purchasing strategy you and others recommend for video card upgrades is buying the second tier and selling your old card to offset the price, I can absolutely see performance after a year or two as being highly relevant.

What do you see as the resale price of a card that is seen as pretty useless relative to the competition?
Have you seen any GPU's value move in such a way within 2 years that it is now useless?
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
All Nvidia GPUs based on the Kepler microarchitecture except for the GTX 780 Ti ...

And all GPUs with 2GB of video memory or less are worthless as well, but I can't rag on them a whole lot since you get what you pay for ...
 

AnandThenMan

Diamond Member
Nov 11, 2004
I love how this whole conversation is based on which card ended up being faster...
The thread title is -
Which GPU do you think has aged the worst in the last 3 years?
So the conversation is exactly on topic, what's the problem? Also I'd much rather have a GPU that started out a bit slower than the competition and ended up being faster and better over the course of ownership. That is another way of stating longevity.

A perfect example is the 7970 vs. the GTX 680: one was a "now" architecture (that skimped on memory), the other forward-thinking.
 

tential

Diamond Member
May 13, 2008
The thread title is -
Which GPU do you think has aged the worst in the last 3 years?
So the conversation is exactly on topic, what's the problem? Also I'd much rather have a GPU that started out a bit slower than the competition and ended up being faster and better over the course of ownership. That is another way of stating longevity.

A perfect example is the 7970 vs. the GTX 680: one was a "now" architecture (that skimped on memory), the other forward-thinking.

So why isn't the portion in which the GPU is slower than the competition (the beginning) being factored into this equation?

What if your GPU is slower than the competition for 3 years, then in the third year it's faster? Is that really a good value? What if it's 4 years?

Time spent being faster is a huge part of the equation... Availability counts as well. A GPU I can buy today is better than a future GPU I can't buy yet, because that's performance I can use now. Time to play matters, since time is money/value.
 

railven

Diamond Member
Mar 25, 2010
Have you seen any GPU's value move in such a way within 2 years that it is now useless?

Who in this thread is still using a GPU they bought 2+ years ago? The only person I'd guess is RS, who said he went to consoles and bought a 360 recently.

AMD is riding so high on the DX12 performance gains that you've got posts saying "yeah, I'd buy a slower card if it ends up faster."

/facepalm

Because AMD's previous cards ended up faster than NV's cards in the past. AMD sowed the seeds for their current high, kudos to them, but that doesn't change anything.

Guess I should pick my pom-poms back up because the card I bought in 2012 and sold in 2014 is NOW faster than the competition.

No one says "I'll buy a slower card today if it ends up faster tomorrow." I wonder if these people also buy AMD CPUs because, hey, they're slower today, but you never know about tomorrow.
 
Feb 19, 2009
Who in this thread is still using a GPU they bought 2+ years ago?

Still got R290s and an R290X. Until recently I still had the 7950.

Shocking that the 7950 still pulls 60+ fps in Star Wars Battlefront at 1080p, maxed settings.

Re the "slower then, faster now" example of the 7970 vs. 680: the 680 was slightly faster for about 3 months, then AMD's drivers took care of that and it became even. Then the GHz Edition was released and the 7970 became faster. Or, for you people who love to tout OC, from day 1 the 7970 OC was faster than the 680 OC. Heck, even [H] with their sketchy bias found an OC'd 7950 matching an OC'd 680 back then.

In the same context, the R290/X vs. 780/Titan/780 Ti was similar, but the AMD cards were cheaper.

The 780 is sad, since I recall it was quite a lot faster than Tahiti when it debuted, but lately I'm seeing a trend where new games come out and the 280X/7970 GHz comes close to or matches the 780.
 
Last edited:

railven

Diamond Member
Mar 25, 2010
Still got R290s and an R290X. Until recently I still had the 7950.

These cards aren't even 2 years old yet.

Shocking that the 7950 still pulls 60+ fps in Star Wars Battlefront at 1080p, maxed settings.

And I've got an HD 5870 in another rig, but I wouldn't say "I still use it," at least not in my primary rig. I've also got an aging 4870 1GB (I don't sell/give away all my cards).

EDIT: To your edit:

Re the "slower then, faster now" example of the 7970 vs. 680: the 680 was slightly faster for about 3 months, then AMD's drivers took care of that and it became even. Then the GHz Edition was released and the 7970 became faster. Or, for you people who love to tout OC, from day 1 the 7970 OC was faster than the 680 OC. Heck, even [H] with their sketchy bias found an OC'd 7950 matching an OC'd 680 back then.

I'm well aware of that. I OC'ed the balls out of my 7970, even more when I got an Accelero for $15 from a nice forum member here. I loved my card. And it was amazing back then how someone who slammed it for well over a year changed his tune ONLY after it got a price cut, was bundled with a handful of games, AND the first set of Never Settle drivers came out. Prior to that, it was an overpriced PoS.

In the same context, the R290/X vs. 780/Titan/780 Ti was similar, but the AMD cards were cheaper.

The 780 is sad, since I recall it was quite a lot faster than Tahiti when it debuted, but lately I'm seeing a trend where new games come out and the 280X/7970 GHz comes close to or matches the 780.

I guess this is what happens when NV produces a newer uarch while AMD is still working from the same baseline. /shrug. I'm used to my card being slower than the competition at a certain point; I did buy exclusively AMD cards for over a decade :D My reasoning is still the same: I paid less than or similar to what was on the shelf the day I bought.
 
Last edited:
Feb 19, 2009
These cards aren't even 2 years old yet.

And I've got an HD 5870 in another rig, but I wouldn't say "I still use it," at least not in my primary rig. I've also got an aging 4870 1GB (I don't sell/give away all my cards).

R290/X is 2 years now? Released October 2013.

Well, I was playing Witcher 3 on a 7950 rig upstairs only a few months ago. I also had the 670, which was its competitor (slightly faster, IIRC), and it struggled at the same settings where the 7950 managed 45 fps (close to maxed quality).

Obviously long term value doesn't matter for gamers who upgrade often.
 

railven

Diamond Member
Mar 25, 2010
R290/X is 2 years now? Released October 2013.

Semantics aside, two years old next week.
http://www.anandtech.com/show/7457/the-radeon-r9-290x-review

Well, I was playing Witcher 3 on a 7950 rig upstairs only a few months ago. I also had the 670, which was its competitor (slightly faster, IIRC), and it struggled at the same settings where the 7950 managed 45 fps (close to maxed quality).

Perhaps I should reword. I still play WoW on the 5870 when I'm in the basement (mother-in-law issues). Again, that doesn't mean I'd consider it a card I really use.

Obviously long term value doesn't matter for gamers who upgrade often.

We're using day-1 MSRP and current reviews for this thread. It's obvious why. Which is weird because, well, who is still using the majority of the cards being talked about? I've got two friends using the 660 Tis I unloaded. I don't think they're bending themselves into pretzels trying to argue the value of their cards. One bought his off me because I sold it to him for $50. The last game I heard he played was the new Gauntlet. These types of threads are just amusing, I guess.
 

badb0y

Diamond Member
Feb 22, 2010
And you bought 2 of them?
I purchased one a few months after it came out and then I purchased the second one a bit later. I mean, I didn't know the performance was going to tank in the future. I don't have a crystal ball.
Question: if you overclocked both 780s, wouldn't that roughly equal a GTX 980 Ti?
No, my cards were tanking hard in The Witcher 3. Even a stock 980 Ti smacks the crap out of my SLI setup.
Or did you buy the 980 Ti for the 6GB, for a 4K monitor?
Nope, my main monitor is a 1600p Dell, but I played The Witcher 3 on my TV, which is 1080p. Either way, neither of my displays needs 6 gigs of VRAM.
The Witcher 3 was fixed, I think, a week later with a driver update for Kepler.
Yeah, I read about that "fix". You see, I am not an idiot; it wasn't like I decided to buy a new GPU on a whim. My cards were exhibiting a pattern of crappy performance in modern games. I first noticed it in Assassin's Creed Unity, when I would max the game out and still get some frame drops. I thought that was weird, but ACU was notoriously buggy, so I blamed the game engine. The same thing repeated with Far Cry 4 and Watch Dogs, so I wrote off Ubisoft completely and blamed them for the performance problems. I was getting some performance problems in other games as well, such as Divinity, but I'm not sure whether those were because of the game, the drivers, or the multi-GPU config.

Then The Witcher 3 came out and I was pumped... I had been waiting for this game since I beat The Witcher 2, I had all my save files backed up and everything, and... the game ran like shit at 1080p. So I decided to get rid of them and pick up the 980 Ti, and the performance difference was crazy.

Later on there were issues with Batman: Arkham Knight, but at that point I had already upgraded my system and it ran the game decently well compared to older generations. Come to think of it, a lot of these problematic games were using GameWorks as well :colbert:.

EDIT: Also, single-GPU > multi-GPU configs; that's pretty much established as far as I'm concerned.
 

Gikaseixas

Platinum Member
Jul 1, 2004
Who in this thread is still using a GPU they bought 2+ years ago?

I'm still using my 7970 for every game in my Steam library, including Project Cars with lowered settings, lol.
Very capable card, great longevity indeed.
 

Gikaseixas

Platinum Member
Jul 1, 2004
What if your GPU is slower than the competition for 3 years, then in the third year it's faster? Is that really a good value?

It is great value for people who don't upgrade as often as you or I do. Some people have 1080p monitors, and a GTX 670/680/770 or 7950/7970/7970 GE is more than capable of running most games at that resolution.
 

provost

Member
Aug 7, 2013
The thread title is -
Which GPU do you think has aged the worst in the last 3 years?
So the conversation is exactly on topic, what's the problem? Also I'd much rather have a GPU that started out a bit slower than the competition and ended up being faster and better over the course of ownership. That is another way of stating longevity.

A perfect example is the 7970 vs. the GTX 680: one was a "now" architecture (that skimped on memory), the other forward-thinking.

Pretty much this. Although I don't have any AMD cards in use at the present time, I do have a number of Kepler/GK110-based highest-end cards, including a few Titans. I can occasionally use the GK110 Titans' DP for some applications, but in "gaming value" relative to AMD cards of similar vintage, my 690, Titans, and 780 Ti haven't kept up (otherwise known as not being competitive in the long run). Obviously Nvidia is a business and this has to do with what's best for its shareholders, so I get the "business strategy" underpinning Nvidia's product management cycle.
But that doesn't mean I have to like it as a consumer/gamer... lol
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
Could be this whole perception was created because Nvidia tackled AMD's high end (Tahiti) with their midrange (GK104) and won initially. Over time, the larger memory bus and the overall larger amount of hardware in the die were able to garner much bigger gains from driver improvements than Kepler did. As some folks here are saying, the 780 Ti still offers respectable gameplay in comparison to the competition.
Problem is, some are making it sound like Kepler cards can no longer play games. It's pretty funny.
 

Erenhardt

Diamond Member
Dec 1, 2012
Could be this whole perception was created because Nvidia tackled AMD's high end (Tahiti) with their midrange (GK104) and won initially. Over time, the larger memory bus and the overall larger amount of hardware in the die were able to garner much bigger gains from driver improvements than Kepler did.
Let's say it is true...

As some folks here are saying, the 780 Ti still offers respectable gameplay in comparison to the competition.

... then why didn't it happen in this example, where AMD tackled NV's high end (GK110) with their midrange (Hawaii)?

Some designs have more bottlenecks than others. Some have features that are yet to be used.
 

zlatan

Senior member
Mar 15, 2011
Kepler is not bad. It just needs some special optimization, and the driver has to use a very good register allocation strategy to utilize all the CUDA cores. Also, one SMX has just four schedulers, and each scheduler can handle 32 threads (the warp size). That is only 128 threads in an SMX, so only 128 CUDA cores can be used. The program needs some optimization to find independent instructions in order to utilize all 192 CUDA cores.
The closed-source middleware is the problem, for example GameWorks. It doesn't have public source code, so the devs may not be able to optimize the shaders for Kepler. That limits optimal ALU usage, so the hardware loses some performance. But as you can see in the tests, non-Nvidia titles run well on Kepler; the latest example is SW Battlefront.
It is important to understand that GameWorks is not made for the players. This middleware is built around a business strategy of limiting performance on older hardware. That is the main reason it doesn't have public source code: the devs could add that performance back, but Nvidia doesn't want that. They want to sell the newer generation.
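A back-of-the-envelope illustration of that scheduling math (a sketch of the claim as stated above, not vendor documentation): four schedulers each issuing one 32-wide warp per cycle covers only 128 of an SMX's 192 CUDA cores; the remaining 64 are fed only when a second, independent instruction can be dual-issued.

```cpp
// Rough utilization math for a Kepler SMX, as described above
// (illustrative sketch only, not vendor documentation).
#include <cstdio>

int main() {
    const int cuda_cores_per_smx = 192;  // Kepler SMX
    const int warp_schedulers    = 4;    // per SMX
    const int warp_size          = 32;   // threads per warp

    // One instruction issued per scheduler per cycle: 4 * 32 = 128 lanes busy.
    int single_issue_lanes = warp_schedulers * warp_size;
    std::printf("single-issue: %d/%d cores busy (%.0f%%)\n",
                single_issue_lanes, cuda_cores_per_smx,
                100.0 * single_issue_lanes / cuda_cores_per_smx);

    // To feed the remaining 64 cores, a scheduler must dual-issue a second,
    // independent instruction -- which requires the shader code (and the
    // compiler/driver) to expose instruction-level parallelism.
    int dual_issue_lanes = 2 * warp_schedulers * warp_size;
    int usable = dual_issue_lanes > cuda_cores_per_smx ? cuda_cores_per_smx
                                                       : dual_issue_lanes;
    std::printf("with ILP (dual-issue): %d/%d cores busy (%.0f%%)\n",
                usable, cuda_cores_per_smx,
                100.0 * usable / cuda_cores_per_smx);
    return 0;
}
```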
 
Feb 19, 2009
Kepler is not bad. It just needs some special optimization, and the driver has to use a very good register allocation strategy to utilize all the CUDA cores. Also, one SMX has just four schedulers, and each scheduler can handle 32 threads (the warp size). That is only 128 threads in an SMX, so only 128 CUDA cores can be used. The program needs some optimization to find independent instructions in order to utilize all 192 CUDA cores.

The closed-source middleware is the problem, for example GameWorks. It doesn't have public source code, so the devs may not be able to optimize the shaders for Kepler. That limits optimal ALU usage, so the hardware loses some performance. But as you can see in the tests, non-Nvidia titles run well on Kepler; the latest example is SW Battlefront.

It is important to understand that GameWorks is not made for the players. This middleware is built around a business strategy of limiting performance on older hardware. That is the main reason it doesn't have public source code: the devs could add that performance back, but Nvidia doesn't want that. They want to sell the newer generation.

It seems I'm not the only one to notice that Kepler performs GREAT in games as long as NV is NOT involved in their development. :)

Oh, and what you say makes too much sense; it's going to hurt some folks who still don't believe that GameWorks being closed source is bad for gamers/devs.
 

tential

Diamond Member
May 13, 2008
Could be this whole perception was created because Nvidia tackled AMD's high end (Tahiti) with their midrange (GK104) and won initially. Over time, the larger memory bus and the overall larger amount of hardware in the die were able to garner much bigger gains from driver improvements than Kepler did. As some folks here are saying, the 780 Ti still offers respectable gameplay in comparison to the competition.
Problem is, some are making it sound like Kepler cards can no longer play games. It's pretty funny.

I don't buy my GPU to fight another GPU from the other vendor. I haven't purchased an Nvidia GPU since my laptop, and I don't intend to purchase one, due to FreeSync being available in monitor sizes I actually want.

That said, this whole conversation seems framed to bash Kepler because AMD has "gotten faster".

I could just as easily make a thread saying "Which GPU had the worst launch drivers in the last 3 years?"
Then it becomes an AMD driver bashing thread.

The fact that this thread is completely ignoring gaming performance at anything other than the tail end of the GPU lifespan speaks volumes as to the goal of the thread.
 
Feb 19, 2009
@tential
You forgot that Tahiti performed very well when it was released. The 680 came after it, with the goalposts known; despite that, it was barely faster (single digits, 5% IIRC) when it launched. OC vs. OC, the 7970 beat it, and that was back when some of you thought OC wasn't an important factor! AMD took the lead with the GHz Edition and never looked back.

The R290 on release was close to the 780 in performance. The R290X was around 10% behind the 780 Ti. But here's the kicker: the R290 was $100 cheaper and the R290X was $200-$250 cheaper ($750 vs. $500?).

Now the R290 routinely beats the 780 by a large gap, to the point where the 7970 GHz/R280X, which used to be 20-25% behind the 780, is matching it. The R290X routinely ties the 780 Ti or straight-up beats it in modern titles. GCN has outgrown Kepler.

So your statement that we're ignoring gaming performance other than at the tail end of a GPU's lifespan is false. In their prime, Tahiti and Hawaii were very competitive. This was reflected in the market share too: prior to Maxwell's release AMD was climbing up from 37%, and a 40:60 ratio is extremely good compared to where they are now.

As to the current situation and moving forward, GCN will outgrow Maxwell for the following reasons:

DX12 solves AMD's two biggest weaknesses, which handicapped their performance in the DX10/11 era.

1. Driver overhead & single-threaded drivers. DX12 reduces overhead and has native multi-threaded rendering (see the sketch after this list).

2. GCN, being designed for parallel execution of graphics and compute, is unable to do so in the serial environment of DX10/11 and really runs crippled. It can run free with async compute in DX12.
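To put point 1 in concrete terms (a deliberately API-agnostic sketch; CommandList here is a made-up stand-in, not a real D3D12 interface): the DX12 model lets each worker thread record its own command list independently, so only the final, cheap submission is serialized, instead of funneling every draw call through one driver thread the way DX11 typically does on AMD.

```cpp
// Minimal, API-agnostic sketch of DX12-style multi-threaded command recording.
// CommandList and the recorded strings are stand-ins, not real D3D12 interfaces.
#include <cstddef>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct CommandList {                      // stand-in for a per-thread command list
    std::vector<std::string> commands;
    void record(const std::string& draw) { commands.push_back(draw); }
};

int main() {
    const unsigned num_threads = 4;
    std::vector<CommandList> lists(num_threads);
    std::vector<std::thread> workers;

    // Each worker records its own slice of the frame in parallel -- no single
    // "driver thread" serializing all draw-call translation as in DX11.
    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([t, &lists] {
            for (int i = 0; i < 3; ++i)
                lists[t].record("draw batch " + std::to_string(t) + "." +
                                std::to_string(i));
        });
    }
    for (auto& w : workers) w.join();

    // Only the final submission is serialized, and it is cheap.
    std::size_t total = 0;
    for (const auto& cl : lists) total += cl.commands.size();
    std::printf("submitted %zu commands recorded across %u threads\n",
                total, num_threads);
    return 0;
}
```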

Earlier in the year, I made a comment that by this year's end we'd see the R290 beating the 970 and the R290X matching the 980. We're seeing that with recent games.

In the DX12 era, the R290 will match the 980 and the R290X will beat it. Obviously the refreshed 390/390X will do a bit better.

When Arctic Islands and Pascal are here, we'll see how revolutionary the new architectures are, and whether NV is all aboard the async compute train (more parallel engines). If not, then I still see GCN as being more future-proof, for such old tech.
 

Headfoot

Diamond Member
Feb 28, 2008
I love how this whole conversation is based on which card ended up being faster...

So you guys care more about your GPU performance at the end of its life cycle, rather than the beginning of its life cycle (when you're turning down settings anyway and looking to upgrade).

That's wild.

Did you read the title of the thread? We get it. You don't like the logical conclusion of the train of thought provoked by the thread. That's okay. No one is making you post in this thread.
 

zlatan

Senior member
Mar 15, 2011
DX12 solves AMD's two biggest weaknesses, which handicapped their performance in the DX10/11 era.

GCN's biggest weakness is the shader compilation procedure in D3D. The D3D bytecode is a really outdated IR, and Microsoft's compiler optimizations are harmful for modern GCN-like designs. AMD needs to de-optimize the code to ensure better compilation to the hardware. On Xbox One the same shader is nearly 20-25% faster compared to PC. With a more robust IR (perhaps SPIR-V?), GCN could easily gain another 10-15% performance in general.
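For context on the pipeline being described (a minimal sketch using the stock d3dcompiler API; the tiny shader and the choice of flags are illustrative assumptions, not AMD's or Microsoft's recommendations): the HLSL front end emits D3D bytecode, and how much optimization is baked into that bytecode, e.g. D3DCOMPILE_OPTIMIZATION_LEVEL3 versus D3DCOMPILE_SKIP_OPTIMIZATION, changes what the vendor driver's own back-end compiler has to work with.

```cpp
// Sketch: compiling one trivial HLSL pixel shader to D3D bytecode (DXBC) with
// two different optimization settings, to show that the IR the driver receives
// already reflects Microsoft's compiler choices. Windows-only; links against
// d3dcompiler.lib. The shader itself is just an illustrative stand-in.
#include <cstdio>
#include <cstring>
#include <windows.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

static void compile_with(const char* hlsl, UINT flags, const char* label) {
    ID3DBlob* dxbc = nullptr;
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompile(hlsl, std::strlen(hlsl), "example.hlsl",
                            nullptr, nullptr, "main", "ps_5_0",
                            flags, 0, &dxbc, &errors);
    if (FAILED(hr)) {
        if (errors) std::printf("%s\n", (const char*)errors->GetBufferPointer());
    } else {
        std::printf("%s: %zu bytes of DXBC\n", label, dxbc->GetBufferSize());
    }
    if (dxbc) dxbc->Release();
    if (errors) errors->Release();
}

int main() {
    const char* hlsl =
        "float4 main(float4 pos : SV_Position) : SV_Target {"
        "    float x = pos.x * 0.001f;"
        "    return float4(x, x * x, 0.0f, 1.0f);"
        "}";

    // Fully optimized bytecode: the driver's back end gets pre-optimized IR.
    compile_with(hlsl, D3DCOMPILE_OPTIMIZATION_LEVEL3, "O3 bytecode");
    // Barely optimized bytecode: closer to the source, more left to the driver.
    compile_with(hlsl, D3DCOMPILE_SKIP_OPTIMIZATION, "unoptimized bytecode");
    return 0;
}
```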