AMD 6000 reviews thread


BFG10K

Lifer
Aug 14, 2000
22,674
2,824
126


Wow, even the number 3 card has 16GB VRAM and is faster than the 2080 Ti. And the $1000 6900 XT matches the $1500 3090 in performance.

The 3000 parts don't look so hot now.

Post-reviews edit:
It's astonishing what AMD have managed to achieve with both Ryzen 5000 and Radeon 6000, especially given the absolutely minuscule R&D budget and resources compared to Nvidia/Intel. Lisa Su is definitely the "Steve Jobs" of AMD, with such a remarkable turnaround.

6900 XT:
(It's absolutely amazing to see AMD compete with the 3090)


 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
If they take their time to do a good implementation that doesn't completely tank performance, that's far better than just slapping something together so you can say you have it at launch.
 

JPB

Diamond Member
Jul 4, 2005
4,064
89
91
RX 6700 XT incoming?

AMD Radeon RX 6700 XT: Navi 22 GPU + 12GB GDDR6 = mid-range killer

We have a new tease from Patrick Schur, who tweeted some details on the Navi 22 GPU power targets -- which should see the new Radeon RX 6700 series arriving as an amazing value-for-money set of cards. We're looking at around 40 Compute Units, with the Navi 22 XT GPU powering the higher-end Radeon RX 6700 XT.

AMD will keep the core count on the new Radeon RX 6700 XT the same as on the Radeon RX 5700 XT, but it will greatly benefit from improved power efficiency, higher clock speeds, and raw performance -- as well as a purported 12GB of RAM, up from 8GB on the RX 5700 XT.

We are looking at the Navi 22 XT using 186-211W of power, compared to the 225W of the Navi 10-based Radeon RX 5700 XT. The cut-down Navi 22 XL-based Radeon RX 6700 (non-XT) will reportedly use somewhere between 146-156W, which is much less than the 180W used by the Radeon RX 5700.

The Radeon RX 6700 XT is expected to ship with a 192-bit memory bus, which points to either 6GB or 12GB of GDDR6 -- and with the Radeon RX 6800 and Radeon RX 6800 XT graphics cards both packing 16GB of RAM, I would expect the Radeon RX 6700 series cards to all pack 12GB.

As for when AMD will launch its new Radeon RX 6700 series cards, January 2021 seems to be the window -- so expect some huge releases then, as NVIDIA also has its new mid-range GeForce RTX 3060 Ti right around the corner...
 

JPB

Diamond Member
Jul 4, 2005
4,064
89
91
I've been thinking about going to an RX 5600 XT. But if there will be an RX 6600 XT, I am going to wait. Been with Nvidia for FAR too long now. 6 years is enough.

Considered going with a Ryzen 5600X and an RX 5600 XT.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,633
136
Nah, Cyberpunk will not get RT on RDNA2 GPUs at launch, just like Godfall won't get RT on NVIDIA GPUs at launch. CDPR gave an official statement to Computerbase.


Yeah, it's been a confusing ride. Initially someone from CDPR said RT wouldn't be supported on AMD cards. Then they gave an official statement saying it would. Now they released another official statement saying not at launch. Either way, unlike the first reports, it will be supported on AMD cards, just not at launch.
 

Glo.

Diamond Member
Apr 25, 2015
5,661
4,419
136
I've been thinking about going to an RX 5600 XT. But if there will be an RX 6600 XT, I am going to wait. Been with Nvidia for FAR too long now. 6 years is enough.

Considered going with a Ryzen 5600X and an RX 5600 XT.
In Poland there will be a promotion on Wednesday related to the Black Week sales, and I can snag a Core i3 10100F for 45€.

I'm going for that plus a 5600 XT, most likely.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
RX 6700 XT incoming?

AMD Radeon RX 6700 XT: Navi 22 GPU + 12GB GDDR6 = mid-range killer

We have a new tease from Patrick Schur, who tweeted some details on the Navi 22 GPU power targets -- which should see the new Radeon RX 6700 series arriving as an amazing value-for-money set of cards. We're looking at around 40 Compute Units, with the Navi 22 XT GPU powering the higher-end Radeon RX 6700 XT.

AMD will keep the core count on the new Radeon RX 6700 XT the same as on the Radeon RX 5700 XT, but it will greatly benefit from improved power efficiency, higher clock speeds, and raw performance -- as well as a purported 12GB of RAM, up from 8GB on the RX 5700 XT.

We are looking at the Navi 22 XT using 186-211W of power, compared to the 225W of the Navi 10-based Radeon RX 5700 XT. The cut-down Navi 22 XL-based Radeon RX 6700 (non-XT) will reportedly use somewhere between 146-156W, which is much less than the 180W used by the Radeon RX 5700.

The Radeon RX 6700 XT is expected to ship with a 192-bit memory bus, which points to either 6GB or 12GB of GDDR6 -- and with the Radeon RX 6800 and Radeon RX 6800 XT graphics cards both packing 16GB of RAM, I would expect the Radeon RX 6700 series cards to all pack 12GB.

As for when AMD will launch its new Radeon RX 6700 series cards, January 2021 seems to be the window -- so expect some huge releases then, as NVIDIA also has its new mid-range GeForce RTX 3060 Ti right around the corner...
What Patrick Schur posted is not power for the whole card (TBP), but for GPU+VRAM only (TGP); it was the same situation with his Big Navi leak. So expect a TBP comparable to Navi 10, unless these values are not for the reference design but only for OC AIB models.
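To put rough numbers on that distinction, here's a minimal sketch of the TGP-to-TBP conversion. The ~1.18 overhead factor is my own assumption, back-derived from Navi 21's leaked 255 W TGP versus its 300 W reference TBP; real board overhead varies with fans, VRM losses, and board design.

Code:
# Minimal sketch: estimate total board power (TBP) from a leaked
# GPU+VRAM power figure (TGP).
# ASSUMPTION: overhead factor ~1.18, back-derived from Navi 21's
# leaked 255 W TGP vs. the 300 W reference TBP.

def tbp_from_tgp(tgp_watts: float, overhead: float = 300 / 255) -> float:
    """Scale GPU+VRAM power up to an estimated whole-card power."""
    return tgp_watts * overhead

for tgp in (186, 211):  # leaked Navi 22 XT TGP range from the post
    print(f"TGP {tgp} W -> ~{tbp_from_tgp(tgp):.0f} W TBP")

Both leaked values land in roughly the 219-248 W TBP range, i.e. right around Navi 10's 225 W, which matches the "comparable TBP to Navi 10" expectation.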
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Navi 22 should give us a better understanding of how well the Infinity Cache works, because we'll be able to compare a 40 CU card to its predecessor and adjust the clocks to similar levels on both. The bus width on Navi 22 will be smaller, but if they use the same 1600 MHz GDDR6 memory as Navi 21 it will help compensate for that to some degree. I think both of the 5600 cards have a 192-bit bus, so those could always be used for comparison as well against a similarly cut-down Navi 22 part.

Based on what we've seen with Navi 21, we'll probably see clock speeds somewhere in the range of 20% - 30% higher, and the inclusion of infinity cache will likely help compensate for the smaller bus width or any memory bottleneck that may materialize as the clocks get pushed higher. I have a sneaking suspicion that Navi 22 is going to be a real monster in 1080p and 1440p due to the infinity cache. 5700XT performance with a 20% boost due to the aforementioned reasons puts it pretty close to 2080 Super performance in general and lets it even match a 2080 Ti in a number of titles at those lower resolutions.
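As a rough sanity check on that kind of estimate, here's a small sketch of the scaling arithmetic. The 0.7 efficiency factor is purely an assumption on my part; FPS rarely scales 1:1 with clocks.

Code:
# Back-of-the-envelope Navi 22 uplift over the 5700 XT from clocks alone.
# ASSUMPTION: only ~70% of a clock gain shows up as FPS (efficiency=0.7),
# since memory and other bottlenecks eat part of it.

def estimated_uplift(clock_gain: float, efficiency: float = 0.7) -> float:
    """Fraction of a relative clock gain expected to appear as performance."""
    return clock_gain * efficiency

for gain in (0.20, 0.30):
    print(f"+{gain:.0%} clocks -> roughly +{estimated_uplift(gain):.0%} "
          "performance vs. a 5700 XT")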
 

Glo.

Diamond Member
Apr 25, 2015
5,661
4,419
136
Based on what we've seen with Navi 21, we'll probably see clock speeds somewhere in the range of 20% - 30% higher, and the inclusion of infinity cache will likely help compensate for the smaller bus width or any memory bottleneck that may materialize as the clocks get pushed higher. I have a sneaking suspicion that Navi 22 is going to be a real monster in 1080p and 1440p due to the infinity cache. 5700XT performance with a 20% boost due to the aforementioned reasons puts it pretty close to 2080 Super performance in general and lets it even match a 2080 Ti in a number of titles at those lower resolutions.
macOS power tables suggest a 2.5 GHz turbo clock for Navi 22.

And considering how conservative the macOS tables were about clock speeds for N21, I'd say we're looking at even higher clock speeds for N22.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
macOS power tables suggest a 2.5 GHz turbo clock for Navi 22.

And considering how conservative the macOS tables were about clock speeds for N21, I'd say we're looking at even higher clock speeds for N22.

The Navi 22 tables might be closer to the limit of what's feasible, simply because its TDP will still be lower than Navi 21's even without conservative clocks. Apple tends to prefer that their products remain thin over building them to handle cooling anything that's pumping out 300W worth of heat.

There isn't any reason to suspect that Navi 22 can clock higher than Navi 21 without hitting the same wall that all RDNA2 will hit, outside of some artificial limit put in place to restrict power draw and keep the card within the limits of the cooling solution. Someone got their 6800 XT to 2.65 GHz, so higher clock speeds are certainly possible, but that's probably getting close to the limits of what the architecture and process are capable of, at least without a very select die and water cooling.

Even a 2.5 GHz boost clock is over 30% higher than the 5700 XT's, which is going to be quite impressive if the inclusion of the Infinity Cache is enough to prevent or alleviate any memory bottlenecks that would keep the card from making use of all of those extra clocks. I'll assume that it is, at least to some degree, or AMD/Apple wouldn't bother targeting those clock speeds.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
I will copy my reply from another thread to this one.
I asked an RX 6800 XT owner named HEAD from another forum (pctforum.tyden.cz) to test his card at 1800 MHz and at the default clock speed; here are his findings.

All values are averages and the tested game was Control:
Frequency: 1810 MHz (100%) vs 2230 MHz (123%)
Performance: 94.9 FPS (100%) vs 105.9 FPS (111.6%)
Power consumption: 170 W (100%) vs 255 W (150%)
Voltage was set to 0.8 V; he said it can't be set any lower. I am not sure if the power consumption was for the whole card; I hope it was.
It looks like AMD didn't really lie in the graph about power efficiency at lower clock speeds.
It looks like AMD could finally have a very good lineup in laptops!

P.S. For comparison, his undervolting gave him 108 FPS at 213 W power consumption, with an average clock speed of 2251 MHz.
As you can see, a 23% increase in clock speed resulted in only 12% better performance. So even if N22 clocks 30-35% higher than the RX 5700 XT, it's still questionable how much performance gain we will see, and the increased clock speed has a very bad effect on power consumption considering how little performance you gain in return.
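For anyone who wants to replicate the arithmetic, here's a minimal sketch using HEAD's numbers from above (the FPS and wattage values are his; the ratios are just derived):

Code:
# Scaling and efficiency math for HEAD's RX 6800 XT Control runs.
# FPS and wattage values are from the post above; ratios are derived.

runs = {
    "1810 MHz (locked)":    {"fps": 94.9,  "watts": 170},
    "2230 MHz (default)":   {"fps": 105.9, "watts": 255},
    "2251 MHz (undervolt)": {"fps": 108.0, "watts": 213},
}

base = runs["1810 MHz (locked)"]
for name, r in runs.items():
    clock_mhz = float(name.split()[0])
    clock_gain = clock_mhz / 1810 - 1
    fps_gain = r["fps"] / base["fps"] - 1
    print(f"{name}: +{clock_gain:.1%} clock, +{fps_gain:.1%} FPS, "
          f"{r['fps'] / r['watts']:.2f} FPS/W")

The default run works out to ~0.42 FPS/W versus ~0.56 FPS/W for the 1810 MHz lock, which is exactly the kind of efficiency gap AMD's slide implied.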

P.S. Thanks, scineram, for downvoting 8 of my comments in a row in the other RDNA2 thread within half an hour, without a single reply to any of them. Next time at least make the effort to reply, so I'll know what was so wrong about them. :mad:
 

Thunder 57

Platinum Member
Aug 19, 2007
2,647
3,706
136
RX 6700 XT incoming?

AMD Radeon RX 6700 XT: Navi 22 GPU + 12GB GDDR6 = mid-range killer

We have a new tease from Patrick Schur, who tweeted some details on the Navi 22 GPU power targets -- which should see the new Radeon RX 6700 series arriving as an amazing value-for-money set of cards. We're looking at around 40 Compute Units, with the Navi 22 XT GPU powering the higher-end Radeon RX 6700 XT.

AMD will keep the core count on the new Radeon RX 6700 XT the same as on the Radeon RX 5700 XT, but it will greatly benefit from improved power efficiency, higher clock speeds, and raw performance -- as well as a purported 12GB of RAM, up from 8GB on the RX 5700 XT.

We are looking at the Navi 22 XT using 186-211W of power, compared to the 225W of the Navi 10-based Radeon RX 5700 XT. The cut-down Navi 22 XL-based Radeon RX 6700 (non-XT) will reportedly use somewhere between 146-156W, which is much less than the 180W used by the Radeon RX 5700.

The Radeon RX 6700 XT is expected to ship with a 192-bit memory bus, which points to either 6GB or 12GB of GDDR6 -- and with the Radeon RX 6800 and Radeon RX 6800 XT graphics cards both packing 16GB of RAM, I would expect the Radeon RX 6700 series cards to all pack 12GB.

As for when AMD will launch its new Radeon RX 6700 series cards, January 2021 seems to be the window -- so expect some huge releases then, as NVIDIA also has its new mid-range GeForce RTX 3060 Ti right around the corner...

I know I'm not the only one that is happy that the "mid-range killer" and "budget killer" are now looking at $400 USD. It wasn't but a few years ago that those roles fell to the RX 480/580 and 1060 6GB, all for about half the price. The 480s and 580s were available for $200 with 4GB or $240 for 8GB. The 1060 6GB was about the same.

Those were advertised as VR-ready and 1440p-ready cards. I played at 1440p on an RX 480 for a few years. I had to dial a few things back but it was very serviceable. Now, just 3-4 years later, the price of entry is twice as much?

I've been thinking about going to an RX 5600 XT. But if there will be an RX 6600 XT, I am going to wait. Been with Nvidia for FAR too long now. 6 years is enough.

Considered going with a Ryzen 5600X and an RX 5600 XT.

The RX 5600 XT is a great card that I nearly got. Pretty much on par with an RX 5700. Just consider the games you play and the resolution. The 6GB VRAM might become a factor at some point. Considering how popular 6GB cards are, though, and if the rumors of the 6GB 5700 XT are true, you should be fine.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
I know I'm not the only one that is happy that the "mid-range killer" and "budget killer" are now looking at $400 USD. It wasn't but a few years ago that those roles fell to the RX 480/580 and 1060 6GB, all for about half the price. The 480s and 580s were available for $200 with 4GB or $240 for 8GB. The 1060 6GB was about the same.

Those were advertised as VR-ready and 1440p-ready cards. I played at 1440p on an RX 480 for a few years. I had to dial a few things back but it was very serviceable. Now, just 3-4 years later, the price of entry is twice as much?
$199 was only for the RX 480 4GB; the RX 480 8GB was $239 and the GTX 1060 6GB was $249, or $299 for the FE. But you are right that prices are still much higher, although it's better than it was with Turing.
The price is still high, but if that "budget killer" RTX 3060 Ti really costs "only" $399, then the price is not that bad considering it performs like the RTX 2080 Super, which originally cost $699.
I checked an old TPU review of the RX 480 and there were 3 games out of 16 where it barely managed 25 FPS at 1440p. On the other hand, the RTX 3060 Ti, which is also aimed at 1440p, should manage at least 50 FPS based on the RTX 3070's scores, which were over 60 FPS at 1440p even in the most demanding games TPU tested.
I think the RX 6700 XT should be somewhere around the level of the RTX 3060 Ti; the CU vs SM count is comparable at 40 vs 38.
 

Gideon

Golden Member
Nov 27, 2007
1,608
3,573
136
I know I'm not the only one that is happy that the "mid-range killer" and "budget killer" are now looking at $400 USD. It wasn't but a few years ago that those roles fell to the RX 480/580 and 1060 6GB, all for about half the price. The 480s and 580s were available for $200 with 4GB or $240 for 8GB. The 1060 6GB was about the same.

Those were advertised as VR-ready and 1440p-ready cards. I played at 1440p on an RX 480 for a few years. I had to dial a few things back but it was very serviceable. Now, just 3-4 years later, the price of entry is twice as much?

The RX 5600 XT is a great card that I nearly got. Pretty much on par with an RX 5700. Just consider the games you play and the resolution. The 6GB VRAM might become a factor at some point. Considering how popular 6GB cards are, though, and if the rumors of the 6GB 5700 XT are true, you should be fine.
Yeah, the sweet spot for GPUs was around $200-300 for almost 20 years.

Ever since the GTX 9xx series prices started to creep upwards (the GTX 960 was trash, and the GTX 970 in reality cost around $350, or ~375€ here). After the release of the GTX 1060 (July 2016) things really slowed down. Yeah, the 1660 and 5600 XT were decent improvements (3 years later!) but not all that exciting. Not having the DX12 Ultimate feature set won't help either (never mind ray tracing; just VRS and mesh shaders will make these cards less viable once optimized PS5/XSX ports start pouring in).

The 2060 SUPER is arguably the current sweet spot, and that's a $400 card. The upcoming 3060 Ti looks to be its successor, and the 5700 (non-XT) will probably also be in the ballpark.

Hopefully the RX 6500 XT and RX 6600 will finally deliver good upgrades in that price range.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Yeah, the sweet spot for GPUs was around $200-300 for almost 20 years.
<...>
The 2060 SUPER is arguably the current sweet spot, and that's a $400 card.

I think $400 today is ~$300 in 2010-era money and easily $200 in 2000-era money; money has lost a lot of value over those years and will lose at least another 30-40% in the next few.

So comparing absolute dollar values doesn't work beyond a generation or two. And don't bother with the official inflation statistics either; those tools are meant for potatoes and cars.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
$199 was only for the RX 480 4GB; the RX 480 8GB was $239 and the GTX 1060 6GB was $249, or $299 for the FE. But you are right that prices are still much higher, although it's better than it was with Turing.

Polaris was only sanely priced in the window between its release and the mining boom. Even 470s were shooting up in price to over $300 when, a few months before, 4GB models were getting down to $130 or less. The mining boom didn't hit Nvidia quite as hard, but they still saw retail prices jump on a few of their cards.

After the release of the GTX 1060 (July 2016) things really slowed down.

The other reason we saw a lot of stagnation was that TSMC's 16nm stuck around for a while, almost as long as 28nm. Sure, Nvidia used the 12nm node for Turing, but that was essentially just a rebranding and didn't improve the density. Meanwhile AMD only released midrange consumer cards (I'm not counting the Radeon VII) on TSMC's 7nm, so they weren't moving the needle on absolute performance either. Turing focused more on adding RT than anything else, so while there were some improvements over the Pascal architecture, there wasn't as much focus there as was historically the case.
 

fleshconsumed

Diamond Member
Feb 21, 2002
6,483
2,352
136
$199 was only for the RX 480 4GB; the RX 480 8GB was $239 and the GTX 1060 6GB was $249, or $299 for the FE. But you are right that prices are still much higher, although it's better than it was with Turing.
The price is still high, but if that "budget killer" RTX 3060 Ti really costs "only" $399, then the price is not that bad considering it performs like the RTX 2080 Super, which originally cost $699.
I checked an old TPU review of the RX 480 and there were 3 games out of 16 where it barely managed 25 FPS at 1440p. On the other hand, the RTX 3060 Ti, which is also aimed at 1440p, should manage at least 50 FPS based on the RTX 3070's scores, which were over 60 FPS at 1440p even in the most demanding games TPU tested.
I think the RX 6700 XT should be somewhere around the level of the RTX 3060 Ti; the CU vs SM count is comparable at 40 vs 38.
One, before the mining boom hit you could buy the 480/470 at deep, deep discounts; if I remember correctly you could get an RX 470 4GB for about $120 and an RX 470 8GB for around $150. Two, it seems that everyone is ignoring that we used to get more performance at the same price with each new generation. I know that hasn't been the case for the past 4 years, but forgive me for not jumping for joy over prices kind of sort of returning to sanity.
 

MrTeal

Diamond Member
Dec 7, 2003
3,554
1,658
136
I think $400 today is ~$300 in 2010-era money and easily $200 in 2000-era money; money has lost a lot of value over those years and will lose at least another 30-40% in the next few.

So comparing absolute dollar values doesn't work beyond a generation or two. And don't bother with the official inflation statistics either; those tools are meant for potatoes and cars.
If you want to say inflation isn't a good indicator for computer hardware, compare the GPU against the other components of the system.

In 2000 a value gaming system had a $200 processor, a $160 motherboard, $125 for RAM, $140 for storage, and $170 for case/PSU/other. The GPU there was the GeForce DDR, the best GPU available at the time, at $270. That system (minus monitor and speakers) was a bit over a grand back then, and a top-of-the-line GPU was about 25% of the system cost.
The GPU in the high-end system was the AV version of the same card at $330, and it made up 15% of the cost of the tower.

In 2010, a midrange system again had a $200 processor and a $200 midrange GPU (6850), with a total price of ~$900. If you wanted the true midrange card, you could add $40 and pick up a 6870, or grab the $260 GTX 470. Even the 470 would again have made up about 25% of the system cost.

AT doesn't have a holiday guide out yet for this year, but a pre-Zen 3 midrange gaming build would probably look something like this.
Again, the system would be a hair over $1000, but now the GPU is 40% of the cost of the system.

In midrange systems we've gone from spending 25% of the system cost on a top-of-the-line GPU 20 years ago, to 25% for an x70-level enthusiast card 10 years ago, to 40% of the total on an upper-midrange card today.

We'll see what it looks like in the spring once you can more easily buy things again, but I'd argue the price of entry for an upper-midrange card has continued to go up, even relative to the cost of other computer hardware. We'll see what the 3060 Ti and 6700 come in at, but it doesn't look to be getting better this generation either.
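The percentages above are easy to reproduce. A quick sketch, where the totals are my rounding of the post's figures ("a bit over a grand", "~$900", "a hair over $1000"):

Code:
# GPU share of total system cost, using the build figures from the post.
# Totals are rounded approximations of the post's system prices.

builds = [
    ("2000 value build (GeForce DDR)", 270, 1065),
    ("2010 midrange build (HD 6870)",  240,  940),
    ("2020 midrange build",            400, 1000),
]

for name, gpu, total in builds:
    print(f"{name}: ${gpu} / ${total} = {gpu / total:.0%} of the system")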
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
One, before the mining boom hit you could buy the 480/470 at deep, deep discounts; if I remember correctly you could get an RX 470 4GB for about $120 and an RX 470 8GB for around $150. Two, it seems that everyone is ignoring that we used to get more performance at the same price with each new generation. I know that hasn't been the case for the past 4 years, but forgive me for not jumping for joy over prices kind of sort of returning to sanity.
1.) What I wrote were the prices at the release date, not discount prices.
2.) I think everyone knows the prices are high compared to what they used to be, but what can you do about it except not buy at all. And I am happy that prices have returned to more sane levels, even if they're still quite high.

Many of you may disagree, but if you don't mind playing only at Full HD then you can save a lot of money on a new GPU by not buying the faster ones. For example, the RX 5500 XT is still capable of playing every tested game at max settings in Full HD according to the TPU review. Control was the most demanding game tested and it still managed 36 FPS on average; that's not so bad for the weakest RDNA1 card, which started selling at $169-199 for the 4GB and 8GB versions. Now the question is how much they will ask for the cut-down N23 and how much more performance it will provide.
 

fleshconsumed

Diamond Member
Feb 21, 2002
6,483
2,352
136
1.) What I wrote were the prices at the release date, not discount prices.
2.) I think everyone knows the prices are high compared to what they used to be, but what can you do about it except not buy at all. And I am happy that prices have returned to more sane levels, even if they're still quite high.

Many of you may disagree, but if you don't mind playing only at Full HD then you can save a lot of money on a new GPU by not buying the faster ones. For example, the RX 5500 XT is still capable of playing every tested game at max settings in Full HD according to the TPU review. Control was the most demanding game tested and it still managed 36 FPS on average; that's not so bad for the weakest RDNA1 card, which started selling at $169-199 for the 4GB and 8GB versions.
True, those were discount prices, but not only were cards cheaper in the past, they also got steep discounts several times a year. We haven't had discounts on any generation since then: no 10xx discounts, no 20xx discounts, no Vega discounts, no Navi discounts.

So yeah, like you said, it's either pay up or walk away. I've been dunking a lot of my fun money into AMD CPUs and photography; none of it has gone into video cards since they've been such poor value for money over the past 4 years. Hopefully that'll change in Q1 2021 with the Navi 22/3060 Ti series.
 

Det0x

Golden Member
Sep 11, 2014
1,027
2,953
136
AMD’s Radeon RX 6800 stable with continuous 2.55 GHz and RX 6800 XT overclocked up to 2.5 GHz – Thanks to MorePowerTool and board partner BIOS


6800 @ 289 W
Now the MPT and the XT BIOS are used for the last few centimetres and for the (temporary) first place in the ranking! With 16,939 points, however, that is the end of the line. More is simply not possible on air with AMD's restrictions, short of physically modifying the card or shock-freezing the good piece. 14.7 percent more performance at the end of the day is a real gift, and that's another 4 percent on top. Not bad either!

6800 XT @ 350 W
With the RX 6800 XT it may be a few watts more, and the crowbar turns dark red. At up to 350 watts, i.e. over 17 percent more electrical power consumption, you can achieve up to 7 percent more performance. Whether that lets you sleep peacefully with a green conscience remains to be seen. But at least it works. Also a realization one can live with.
igorslab said:
Yeah, there's a lot of fun to be had with both cards, and especially with the smaller Radeon RX 6800! The MPT only works in a roundabout way, but even so, the cards can be brought to their physically acceptable maximum under normal conditions with air cooling. That's as far as it goes, though, because you don't catch the next-fastest card in the process. However, the RX 6800 XT then beats the GeForce RTX 3080 FE in almost all gaming benchmarks. If anyone needs it…
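Put differently, those overclocks trade efficiency for a little headroom. A quick sketch of the perf-per-watt math, where the stock TBPs (250 W for the 6800, 300 W for the 6800 XT) are reference-spec assumptions on my part:

Code:
# Relative perf/W after the MPT overclocks quoted above.
# ASSUMPTION: stock reference TBPs of 250 W (RX 6800) and 300 W (RX 6800 XT).

results = [
    # (name, stock W, OC W, performance gain from the article)
    ("RX 6800 @ 289 W",    250, 289, 0.147),
    ("RX 6800 XT @ 350 W", 300, 350, 0.07),
]

for name, stock_w, oc_w, perf_gain in results:
    power_gain = oc_w / stock_w - 1
    rel_eff = (1 + perf_gain) / (1 + power_gain)
    print(f"{name}: +{power_gain:.0%} power for +{perf_gain:.0%} perf "
          f"-> perf/W at {rel_eff:.0%} of stock")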
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
One, before the mining boom hit you could buy the 480/470 at deep, deep discounts; if I remember correctly you could get an RX 470 4GB for about $120 and an RX 470 8GB for around $150. Two, it seems that everyone is ignoring that we used to get more performance at the same price with each new generation. I know that hasn't been the case for the past 4 years, but forgive me for not jumping for joy over prices kind of sort of returning to sanity.

Yeah, the 400 series was starting to see discounts just to clear out inventory for the 500 series, which were really just rebranded 400-series cards anyhow, so prices were really low right up until they weren't. However, I don't really agree with the general sentiment that performance has stagnated, or that there was some magical era where $300 could get you mind-blowing performance that's now locked behind $1,000 GPUs due to corporate greed. Consider the GeForce x70 cards and what they've been priced at over the generations:

470: $350
570: $350
670: $400
770: $400
970: $330
1070: $400 ($450 FE card)
2070: $600
3070: $500

The price moves around a lot based on a lot of factors. It's also worth noting that the decade-old 470 would cost $420 in today's dollars due to inflation. Performance expectations have also changed a lot over time. With the older cards, you pretty much had to run two of them in Crossfire or SLI to get an acceptable average frame rate of 60 FPS (which most people now consider a minimum you never want to dip under), even at 1080p. If we compare the 470 and the 3070 at the resolutions commonly used in contemporary games of their time, we see something like this:

GTX 470 - Battlefield: Bad Company 2 (results taken from the AT review)
1680 x 1050: 67.3 FPS
1920 x 1200: 43.4 FPS
2560 x 1600: 32.7 FPS

RTX 3070 - Battlefield V (results taken from TPU review of 3070 FE)
1920 x 1080: 170.7 FPS
2560 x 1440: 136.9 FPS
3840 x 2160: 88.1 FPS

Comparing the cards in today's dollars, a 3070 is 19% more expensive than a 470. However, it's obvious that at all of the main resolutions you get a significantly better experience. If you look at it in terms of low, medium, and high resolutions, the 3070 offers about three times the FPS across the board.
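That comparison is easy to reproduce as quick arithmetic; the inflation factor below is simply the one implied above ($350 in 2010 ≈ $420 today):

Code:
# Price and performance comparison from the post: GTX 470 vs. RTX 3070.
# The ~1.2x inflation factor is the one implied above ($350 -> $420).

inflation = 420 / 350           # 2010 USD -> 2020 USD, per the post
gtx470_today = 350 * inflation  # = $420
rtx3070 = 500

print(f"GTX 470 in today's dollars: ${gtx470_today:.0f}")
print(f"RTX 3070 premium: {rtx3070 / gtx470_today - 1:.0%}")

# Mid-resolution FPS from the reviews quoted above
# (470 at 1920x1200 vs. 3070 at 2560x1440).
print(f"FPS ratio: {136.9 / 43.4:.1f}x for that ~19% premium")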

Turing was just not a good value, but that's largely down to a complete lack of competition to keep Nvidia's prices in check. However, even though prices have increased even after adjusting for inflation, I don't think they've really left the realm of sanity. By way of analogy, entry-level cars today aren't really any more expensive than they were historically: even though the prices seem higher, adjusting for inflation takes care of most of that, most of them come standard with what were previously luxury features, and in general everything is far better from a safety perspective.
 

lightmanek

Senior member
Feb 19, 2017
387
754
136
It seems that my card is one of the winners of the Silicon Lottery!
I just played a bit with undervolting and I can maintain the full 2600 MHz core / 1100 MHz memory at 115% power with 0.925 V instead of the default 1.025 V o_O (EDIT: something weird is going on, as according to GPU-Z and the AMD driver my vGPU stays at 1.025 V no matter where I put the voltage slider, but the card will crash in 3D if I go below 0.9 V, and performance changes as seen in 3DMark. Auto undervolting works as expected and applies a lower vGPU, but at stock clocks ...)

Checked with Fire Strike and yes, performance is going up: in the more demanding first test the average GPU clock went from 2250 MHz to 2350 MHz, and the second test averages 2550 MHz.
https://www.3dmark.com/compare/fs/24104371/fs/24104234/fs/24084536
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
It seems that my card is one of the winners of the Silicon Lottery!
I just played a bit with undervolting and I can maintain the full 2600 MHz core / 1100 MHz memory at 115% power with 0.925 V instead of the default 1.025 V o_O (EDIT: something weird is going on, as according to GPU-Z and the AMD driver my vGPU stays at 1.025 V no matter where I put the voltage slider, but the card will crash in 3D if I go below 0.9 V, and performance changes as seen in 3DMark. Auto undervolting works as expected and applies a lower vGPU, but at stock clocks ...)

Checked with Fire Strike and yes, performance is going up: in the more demanding first test the average GPU clock went from 2250 MHz to 2350 MHz, and the second test averages 2550 MHz.
https://www.3dmark.com/compare/fs/24104371/fs/24104234/fs/24084536
If you have time, can you please check the performance in some modern benchmark at different GPU frequencies, like default, 2000, 1800, 1600, and 1400 MHz? I already posted what I got from another user, and the result was that decreasing the clock speed by ~19% resulted in only 10.5% lower performance. Why do I want it? Because of the mobile versions; it looks interesting.
 