AMD 6000 reviews thread

BFG10K

Lifer
Aug 14, 2000
22,709
2,972
126


Wow, even the number 3 card has 16GB VRAM and is faster than the 2080TI. And the $1000 6900XT matches the $1500 3090 in performance.

The 3000 parts don't look so hot now.

Post reviews edit:
It's astonishing what AMD have managed to achieve with both the Ryzen 5000 and the Radeon 6000, especially given the absolutely minuscule R&D budget and resources compared to nVidia/Intel. Lisa Su is definitely the "Steve Jobs" of AMD with such a remarkable turnaround.

6900XT:
(It's absolutely amazing to see AMD compete with the 3090)


 
Last edited:

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
Last time I checked the 6800 is starting at $579. The 3070 is starting at $499.

The 2080ti was the most expensive flagship until the 3090. Both ridiculously bad values.

Cards like the 5850 comfortably beat last-gen flagships for $250. The 7870 came close to the 6970 and beat the 5870.

The GTX 660 came close to GTX 580 performance, and that was in a generation where NVIDIA decided to call the 104 die a flagship instead of a midrange part.

Even as recently as 2016, cards like the 1060 and 480 competed well with the 980 and 980 Ti for $250.

Mid-range performance relative to previous flagships has been going down while the price has been going up.

All of these cards, the 3070/3080 and 6800XT/6800, are $100-200 overpriced.

You mention that the 5850 comfortably beat last-gen flagships for $250. It was actually $260, and it was 17% faster than the 4890 ($249) but 16% slower than the 4870x2 ($429). So it was slower than the previous generation's flagship.

The 7870 was $350 MSRP, 6970 was $369 MSRP, and the 5870 was $379 MSRP. The 7870 beat the 6970 by 10% and the 5870 by 20%. But the previous generation's flagship wasn't a 6970 or 5870, it was the 6990 or 5970, which beat the 7870 by 30-35% and 10% respectively.

1060 6GB AIBs were at the cheapest $250 but ranged up to $330 and were 15-20% slower than the 980 Ti, which MSRP'd at $649. But remember, it wasn't the top end card, the Titan X at $999 was the top card, and the 1060 6GBs were 25-30% slower than the Titan X. The 1060 6GBs were about on par with the 980, MSRP $549. In other words, a mid-range card from the 10xx generation was on par with a high-mid-range (or low-high-end) card from the 9xx generation.

If we look at Vega -> Navi and compare the 5600XT ($280) mid-range, it fell 25-30% shy of the previous generation's flagship $699 Radeon VII, but it's about on par with the Vega 56/64.

I am having a hard time seeing why there is so much gnashing of teeth over prices. Do people forget that the 6990 and GTX 590 were $699 in 2011 ($820 in today's $$) and the original Titan X was $999 ($1100 in today's $$)? Even the 5970 MSRP'd at $580, the GTX 580 at $500 (>$600 in today's $$).
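If anyone wants to sanity-check those conversions, the arithmetic is a one-liner. A minimal Python sketch, assuming approximate CPI-U annual averages (my own ballpark figures, not official data):

```python
# Rough CPI-based inflation adjustment behind the "in today's $$" figures.
# CPI-U annual averages below are approximate assumptions, not official data.
CPI = {2010: 218.1, 2011: 224.9, 2015: 237.0, 2020: 258.8}

def in_2020_dollars(price: float, year: int) -> float:
    """Scale a launch MSRP to approximate 2020 dollars via a CPI ratio."""
    return price * CPI[2020] / CPI[year]

for label, price, year in [("6990 / GTX 590", 699, 2011),
                           ("original Titan X", 999, 2015),
                           ("GTX 580", 500, 2010)]:
    print(f"{label}: ${price} in {year} is about ${in_2020_dollars(price, year):.0f} today")
```

The outputs land within a few percent of the figures quoted above.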

The 3070/3080 and 6800/6800XT are only overpriced if they are mid-range cards, which I'd argue they aren't. Let's wait for the 6600XT before we start comparing current generation mid-range prices and performance to previous generations.

FWIW, I agree the 6800 is over-priced, but I feel $499-529 is a reasonable price for it.

6800XT at $649 is about right. And historically, so is the halo price of the 6900XT at $999.

This really has been par for the course since 2012 or so. If you look back all the way to at least the 600 series GeForce cards, it's been the case that you have an expensive halo card, and two top end cards priced really high just below it:

600 series - $400 for 670, $500 for 680, and $1000 for 690.
700 series - $649 for 780, $699 for 780 Ti, and $999 for Titan.
900 series - $549 for 980, $649 for 980 Ti, $999 for Titan X.
1000 series - $599 for 1080, $699 for 1080 Ti, $1200 for TITAN X

Same with AMD:
HD7000 series - $450 for 7950, $550 for 7970, $1000 for 7990
R9 200 series - $399 for 290, $549 for 290X, $1499 for 295X2
R9 300 series - $549 for R9 Fury, $649 for R9 Fury X, $1499 for the Pro Duo
RX 400 series - (aberration, the top card, the 480, was $239 but was slower than a 290X from three years prior)
RX 500 series - (same, RX 590 at $279 was slower than an R9 Fury X from three years prior)
Vega series - $399 for Vega 56, $499 for Vega 64, $699 for VII
RX 5000 series - $279 for 5600XT, $349 for 5700, $399 for 5700XT (again somewhat of an aberration without a halo card or the top tier performance to justify higher prices against the RTX 2000 series)

So what does this say?

For the last ~8 years, the script is the same - an expensive halo card and 1-2 high end cards right below it. This is the case with RDNA2 with the 6800 (~GeForce xx80), 6800XT (~GeForce xx80 Ti), and 6900XT (~GeForce Titan). We also see continuous progress in the mid-range (which I'll leave you to review in detail on your own), but in the past there was not the kind of leaps-and-bounds improvement you claim (mid-range beating the flagship from the previous generation). At best, the mid-range is about on par with the 2nd or 3rd best card from the previous generation.
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,321
8,005
136
And among those that do need higher-end hardware and are quite popular but still use DX11, a Vega 64 is barely faster than a 980 Ti. Case in point: Yakuza: Like a Dragon, Ghostrunner and Mafia Definitive Edition.

So my point still stands - that AMD does nothing to optimize DX11 performance barring a select few big-name titles.


Ghostrunner is a DX12 game, so I don't know why you mentioned it.

The Vega 64 underperforms even at 4K in Yakuza, so it's not the DX11 CPU issue you're claiming; the game just isn't performing well on that Vega card at all. Also, here's a 5700XT running it at 4K with 2x SSAA at around 60 fps with a Ryzen 2600, so either there is something wrong with the system you linked to, the Vega 64 is bugged in this game, or whatever issue they had has been fixed.


Mafia Definitive Edition also looks to be performing much better according to the YouTube videos I can find:


So I don't know what to tell you. Out of a giant list of games, you picked 3, of which 1 is actually a DX12 game and the other two don't seem to be limited by your supposed DX11 CPU issue. Perhaps it's just time to move on?
 

Timorous

Golden Member
Oct 27, 2008
1,622
2,784
136
The thing is that AMD's 3080 results in Borderlands and Gears 5 are too low; for example, in Gears TechPowerUp achieved 84.4 fps at the Ultra preset with an i9-9900K @ 5.0 GHz. Also, the game suite that AMD used is not exactly in Nvidia's favor. With more games tested I think the average results will be at least +2.5% in Nvidia's favor and we will have:

TPU do not test Borderlands with Badass mode, and they use the same API for both cards. AMD said they used the best API for each card.

Cross-comparing FPS numbers from different setups is folly; don't do it. There are far too many variables to account for.

Just remember when they showed the Fury X and Radeon VII: in both cases, according to AMD's benchmarks, they were faster than the GTX 980 Ti and RTX 2080, but in reality they were quite a bit slower and they both failed miserably.

They also showed the 5700XT as being faster than the 2070, and then when the real benchmarks came out the 5700XT was ahead by a few % more than AMD claimed in their event.

On top of that, they avoided games like Death Stranding, F1 2020 and Horizon: Zero Dawn, where the 5700XT is faster than the 2070S at 4K, so they could have cherry-picked a lot harder if they had wanted to.
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
SLI or CF solutions were always worse than single fast cards and often wouldn't work. I know at the time people weren't making charts with frame-time variance, but there is a reason SLI and CF were phased out.

The last 3-5 years have been extremely slow for GPU performance.

I'll simplify it since you seem to have forgotten all the arguments you've made in previous posts.

1) The 6800 isn't a mid-range card.

2) Mid-range performance advances at iso-price are larger than top-tier performance advances regardless of price:

Top-tier:
1080 Ti - early 2017 - $699
3090 - late 2020 - $1500
3090 is twice as fast as the 1080 Ti for twice the inflation-adjusted price, over a 3.5-year period
* this assessment will change with the release of the 6800XT, which looks to double 5700XT performance for less than double the price in 1.5 years

Mid-range:
1060 6GB - mid 2016 - $250
5600XT - early 2020 - $280
5600XT is twice as fast as the 1060 6GB for the same inflation-adjusted price, over a 3.5-year period

3) The last few years have brought a doubling of 4K/1440p performance on the GPU side; for both the mid-range and the top end, that has occurred over roughly 3.5 years. For the mid-range, that averages 20-25% gains per year at the same inflation-adjusted price. For reference, on the CPU side, gains are about 15% per year at iso-price from AMD, and a little less than that from Intel.
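To make the per-year figures concrete, here's a minimal Python sketch of the compound-growth arithmetic behind point 3 (the performance factors and time spans are this post's own rough estimates):

```python
# CAGR implied by "performance factor F over N years at the same
# inflation-adjusted price". Inputs are the rough estimates from this post.
def annualized_gain(total_factor: float, years: float) -> float:
    return total_factor ** (1 / years) - 1

# Mid-range: 1060 6GB (mid 2016) -> 5600XT (early 2020), ~2x in ~3.5 years
print(f"mid-range GPU: {annualized_gain(2.0, 3.5):.1%} per year")  # ~21.9%
# CPU comparison: ~15% per year compounds to only ~1.6x over the same span
print(f"CPU at 15%/yr over 3.5 years: {1.15 ** 3.5:.2f}x total")
```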
 
  • Like
Reactions: Tlh97

DisEnchantment

Golden Member
Mar 3, 2017
1,608
5,816
136
A very long stream, but surprisingly informative considering how high up the people being interviewed are. Nice to see some rumors also being confirmed.


  • Mentions why they chose the Infinity Cache and how they got the clock-speed increases (architects from the CPU side were/are involved)
  • Talks about RT perf (developers had 2 years with only RTX cards, Herkelman believes things will improve with new titles)
  • Super Resolution - game devs, Microsoft and Sony essentially begged them not to make a proprietary API, but something that could be used everywhere on all hardware (Intel and Nvidia included). Devs really want a single code path for all platforms (and GPUs) with minimal per-game work. They also want "really good high quality imaging", "really good scaling" and "no performance hit".
  • Why SAM isn't just a PCIe 4.0 BAR switch (well, it is, but a lot of firmware and BIOS work needed to be done for it to get the performance it does without regressions in other places; Nvidia will face similar issues)
  • Supply (they are shipping daily to partners for AIB cards) explains why they always release
And plenty of other stuff I missed.

IMO it's really interesting to get some tidbits straight from the horse's mouth rather than via endless speculators.
They also mentioned professional parts to be announced shortly.

I did not want to get one of the 6000 series because the 5700XT really is very unstable for me on Linux. It could also be that my 5700XT is one of the early-stepping chips. My 5700XT is doing nothing; most of my work is done on an RX480, which, surprise, has ROCm support while the 5700XT does not.
But Phoronix has a good review of the RX 6800XT on Linux w/ ROCm, soundly beating the cards from the competition.
So I am a bit swayed. Waiting for the Pro parts to come out, hopefully something with HBM.
Thermals and noise are awesome; it will be leaps and bounds above my 5700XT.
 
  • Like
Reactions: Tlh97

Mopetar

Diamond Member
Jan 31, 2011
7,848
6,015
136
Another possibility we haven't considered or discussed yet is what percentage of dies are being allocated to 6900XT cards as opposed to 6800/XT models. The yields on TSMC 7nm have been reported as being exceptionally good so there's a fair chance that most of the dies are coming back good. The 6800 having the full Infinity Cache and all of the memory controllers also lends some support to this.

Of course this doesn't mean that they don't get binned for a variety of other reasons, such as hitting target clock speeds and economic factors, but in this case the only thing that matters right now is whether those full dies can hit the required clocks. Results with the 6800/XT show that most chips can be pushed well beyond the reference limits, so it's not hard to imagine that a majority of the full dies don't need their weakest-performing hardware disabled.

Normally you still might bin your chips simply because you don't expect you'll be able to sell that many of your flagship card at the price you want, and lowering the price doesn't increase sales enough to make up for the loss of additional profit on each sale at the higher price. However, we're not operating under normal circumstances at the moment, so there's no reason to disable any shaders on any but the worst-performing full dies.

There was the other rumor that AMD wasn't going to have AIB 6900XT cards at launch. I don't know if it's another case of those being delayed or by how much if they are, but there is an article from earlier this week on a few sites about an ASRock 6900XT so there are some AIB cards in the works for sure. However, if you do believe that AMD is being greedy or playing dirty tricks, the best move (the ultimate bait and switch) is to devote as many dies as possible to the 6900XT and ideally keep as many of those for yourself.

The actual market price for the 6800XT (and everything else in its class) is so inflated that you'll be able to sell any amount of 6900XT cards at launch even if the $1,000 MSRP normally wouldn't be that good of a value. AIB 6800XT cards on eBay are getting bids up to $1,050 so anyone willing to spend $900 or more would gladly get a 6900XT at $1,000.

Even people who would not normally buy one might do so simply due to perceived lack of supply and uncertainty about when they can buy a 6800XT/3080 at MSRP. There are so many 3090 backorders that you can probably pick up some of those people as well, even if you somehow manage to exceed your normal customer base's demand. Given the frenzy of the past month, the scalpers will certainly buy up every last card that doesn't go to anyone in the former groups.

All of this still makes perfect business sense even if AMD isn't malicious, but if you want to be one of those people that believe they're up to no good then this is also exactly what would allow them to profit the most. I don't know if the material costs for the reference 6900XT are substantially higher than the other Navi 21 cards, but assuming they aren't, every reference 6900XT sold is $350 more in AMD's pocket.

There's a recent Tom's article that puts AMD's wafers that aren't dedicated to Sony or Microsoft at about 10,000 per month. Most of this obviously goes to Zen 3 production since it's the most profitable, but they also need to dedicate some to supplying existing product lines that use TSMC 7nm, assuming they don't have any inventory on hand that can be used to free up wafers. It's hard to say how much is being used for those other products, but it probably isn't zero.

So let's assume that AMD sets aside 500 wafers per month, which is 5% of their supposed free wafers. With the size of Navi 21, you get right around 100 dies from each wafer. The good news is that with the suspected good yields, around 75% of those should be defect-free. Assume that you only artificially bin about 10% of those to ensure performance. This leaves you with about 66 dies per wafer that could be sold as a 6900XT.

Normally you probably can't sell 66% of your dies as a flagship card, especially not at $350 over your value-oriented card. Under normal circumstances you might be happy to sell 25% of the dies as the flagship model. I did some exceptionally casual analysis to arrive at this figure, and without a more thorough analysis I don't know how true it is today, but the products used to arrive at the result are similarly situated to the 6800XT and 6900XT in terms of price and performance difference. But remember, we're not in normal circumstances.

So if we assume that AMD is evil and is able to get 66 6900XT dies per wafer and that normally they couldn't expect to sell more than 25 of them at the $1,000 price they're asking, this means that 41 additional 6900XT per wafer will be sold due to some extremely diabolical planning and capitalizing on circumstances. Assuming 3 months of production at 500 WPM, that's a little over 60,000 extra cards and at an extra $350 each, another $21 million in the pockets of AMD.
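For anyone who wants to poke at the assumptions, the whole back-of-envelope fits in a few lines of Python; every input below is a guess from this post, not measured data:

```python
# Fermi-style estimate of AMD's extra 6900XT revenue under this scenario.
wafers_per_month = 500    # ~5% of the rumored ~10,000 free wafers/month
dies_per_wafer   = 100    # rough number of Navi 21 candidates per wafer
yield_rate       = 0.75   # suspected defect-free fraction
perf_bin_rate    = 0.10   # full dies binned down for clocks anyway
normal_flagship  = 25     # dies/wafer you'd normally sell at flagship price
premium          = 350    # $999 (6900XT) minus $649 (6800XT)
months           = 3

full_dies = dies_per_wafer * yield_rate * (1 - perf_bin_rate)   # 67.5/wafer
extra_cards = (full_dies - normal_flagship) * wafers_per_month * months
print(f"extra 6900XT-capable cards: {extra_cards:,.0f}")        # ~64,000
print(f"extra revenue at ${premium} each: ${extra_cards * premium:,.0f}")
```

Rounding the full dies down to ~66 per wafer, as in the text, gives the ~61,500 cards and roughly $21 million quoted.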

Now there are a variety of ways in which this analysis could go wrong, but there are enough factors at play that it starts to turn into a Fermi problem. So if AMD were in fact utilizing this strategy (whether you think it's utterly dastardly or just good business sense), their extra profit is probably in that ballpark: not off by more than an order of magnitude in either direction, and quite possibly within a factor of two.
 

Mopetar

Diamond Member
Jan 31, 2011
7,848
6,015
136
This is a terrible assumption. Just because a title is old doesn't mean that it runs better than current titles. Go look at Kingdom Come Deliverance or Watch Dogs 2, as examples of DX11 titles.

So what? There are always modern titles that run better on one architecture or another due to sponsorship or optimizations being done for those games. Normally we tell people who really care about performance in one specific title to go with the company that gets the best performance in that particular title even if it's not the best choice overall.

I can give examples of even DX9 titles which bring modern GPUs to their knees but that'll be a digression.

So what? There aren't very many of those games being made so it's not worth optimizing for and it isn't as though Black Mesa looks particularly good (outside of comparing it to the older games in the franchise) so bumping up the resolution doesn't add a lot.

How do you know that those who play older games are insignificant enough to not matter?

How do you know they are? You're the person asserting that it's important, so the burden to prove it rests on your shoulders. If I were to claim that unicorns are real, I wouldn't get to demand that you prove they aren't just because you say you don't believe me.

Wow you must love AMD GPUs so much that you're willing to gloss over its flaws.

No, I have no problem admitting their cards don't perform as well in older titles, which is obvious from benchmarks. I just don't think it's worth AMD's limited time to address that problem, for all the reasons I previously stated. There are plenty of other reasons to prefer an Nvidia GPU, such as RT performance, but if you don't think that's important either, then it doesn't matter and shouldn't factor into purchasing decisions.

There is a word for that behaviour, too bad it isn't allowed to be mentioned in these forums.

You can't say "sane" on these forums anymore?
 

leoneazzurro

Senior member
Jul 26, 2016
930
1,465
136
I can speak however I want. You seem to have your panties in a twist for no apparent reason, but you're fine with justifying an inferior product in particular aspects with piss-poor arguments.

Quite frankly, this is a public forum and you cannot simply "speak however you want"; as the regulations of this forum say, you need to maintain respectful behavior, which you have not done in your previous posts. Moreover, we can find similar issues to the ones you describe in past and present Nvidia products as well.
 
Last edited:

Makaveli

Diamond Member
Feb 8, 2002
4,723
1,059
136
I can speak however I want. You seem to have your panties in a twist for no apparent reason, but you're fine with justifying an inferior product in particular aspects with piss-poor arguments.

You don't seem to understand that people will buy what they want regardless of what you post. I've been reading all these posts for the last two days and you just seem to keep going on like a broken record.

No one appointed you some kind of savior to bring the slaves to the promised land. Get over yourself already.
 

Hitman928

Diamond Member
Apr 15, 2012
5,321
8,005
136
Another justification for what AMD can/will/won't do. You could just accept that it is a flaw instead of rationalizing it.

AMD will not go back and spend resources trying to optimize for old games that were badly coded to begin with. It's not worth it. If this is important to you, buy an Nvidia card, move on, and stop filling the thread with the same stuff over and over.
 
  • Like
Reactions: Tlh97 and Makaveli

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,846
3,190
126
After fighting for 3 hours in line and with store managers at Microcenter...

[photo attachment: 20201218-163132.jpg]


I have officially scored the two titans.
The 3090 I got earlier... the 6900XT today.

I am so done with GPU shopping.
 
Jul 27, 2020
16,339
10,351
106
Pretty disappointing launch from AMD, as I expected. They show the RX 6900XT being on par with the RTX 3090 in their own biased benchmarks, which are probably a best-case scenario. It's safe to say that once reviewers get these cards it will be 10% slower, so they price them lower as always... It's a hard sell since they lack proper raytracing and DLSS; I would rather spend a little more and have those features...

Just remember when they showed the Fury X and Radeon VII: in both cases, according to AMD's benchmarks, they were faster than the GTX 980 Ti and RTX 2080, but in reality they were quite a bit slower and they both failed miserably.

Hopefully AMD is not stupid. They are probably holding back some driver optimizations etc. in case nVidia is hiding an ace up their sleeve. They badly need to win against nVidia in this round of GPU wars.
 
  • Like
Reactions: lightmanek

dzoni2k2

Member
Sep 30, 2009
153
198
116
Pretty disappointing launch from AMD, as I expected. They show the RX 6900XT being on par with the RTX 3090 in their own biased benchmarks, which are probably a best-case scenario. It's safe to say that once reviewers get these cards it will be 10% slower, so they price them lower as always... It's a hard sell since they lack proper raytracing and DLSS; I would rather spend a little more and have those features...

Just remember when they showed the Fury X and Radeon VII: in both cases, according to AMD's benchmarks, they were faster than the GTX 980 Ti and RTX 2080, but in reality they were quite a bit slower and they both failed miserably.

You were shitting on Zen 3 and looked like a fool, and now you're doing the same with RDNA2. If that's your way of coping, then so be it.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,409
2,443
146
Interesting, all very close actually. Makes the 6800XT definitely seem like the card to get for me at 1440p.
 
  • Like
Reactions: lightmanek

Borealis7

Platinum Member
Oct 19, 2006
2,914
205
106

I AM OFFICIALLY AMAZED! :eek:
 
  • Like
Reactions: GodisanAtheist

Mopetar

Diamond Member
Jan 31, 2011
7,848
6,015
136
That makes more sense. So it looks like a combination of AMD doing what corporations do (increasing profits) and crazy demand. Hopefully AMD follows through with what they've stated (reference will be produced thru early 2021 and AIB cards at close to MSRP incoming).

Unless AMD increased the price they're charging the AIBs, they (AMD) don't see any of the additional profit from sales above MSRP. Either the AIBs are creating a premium product that they mark up above MSRP, the store that has the cards puts them on the shelf above MSRP, or the scalper who bought the card and relisted it on eBay above MSRP reaps the additional profit. All of the increased cost is introduced after AMD has sold the dies to the AIB partners.

I don't think any of the rumors have said anything about AMD or Nvidia increasing the prices for AIBs, merely that what they're selling the chips and memory to them for leaves very little room for a profit on their end. Add in a shortage and it's little wonder why all of the AIBs want to sell premium models. Their own costs likely don't increase more than $50 for the extra materials for those premium products, but they can easily charge well over $100 more than the MSRP and consumers will gladly pay it because it's less than what the scalpers are asking.

Funnily enough, the only way AMD theoretically makes more profit in this scenario is if they keep more chips to make reference cards that they can sell for more than it costs to build the cards themselves. So the very activity that they're being praised for is actually the one that nets them the most profit. Unless they charge the AIBs more, AMD doesn't realize any additional revenue regardless of who is responsible for the prices above MSRP.

If AMD really wanted to maximize production while ensuring that most customers who wanted a card would be able to get one, they should just switch to an auction system. That naturally lets the price reach what the market is willing to pay and removes the incentive for scalping, since there's no gap between the original sale price and the actual market price. Extra profit over what would normally have been charged can go towards purchasing wafers from someone else who can't sell their own chips for as much. Of course no one wants to hear that, because they think it's fundamentally unfair for some foolish reason.
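To illustrate the mechanism, here's a minimal Python sketch of a uniform-price auction (purely hypothetical; the bid numbers are made up and this is nothing AMD has actually proposed):

```python
# Uniform-price (clearing-price) auction: sell `supply` cards to the highest
# sealed bids, with everyone paying the lowest winning bid.
def clear_auction(bids, supply):
    ranked = sorted(bids, reverse=True)
    winners = ranked[:supply]
    price = winners[-1]          # lowest winning bid sets the clearing price
    return price, winners

bids = [1500, 1200, 1100, 1050, 1000, 900, 750, 650]
price, winners = clear_auction(bids, supply=4)
print(f"clearing price: ${price} for {len(winners)} cards")   # $1050 for 4
# No MSRP/market gap remains, so a scalper can't buy low and resell high.
```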
 

Guru

Senior member
May 5, 2017
830
361
106
I disagree. Right now ray tracing is being done over the top, so it looks like crap frequently enough.

But on console they do not have the power to overdo it. I think we will soon see lightly raytraced games that are not performance hogs and do not overdo the brightness of the lighting.
If you are going to limit it even more, what is the point? I feel like for ray tracing to be viable it needs to be done completely, the full spectrum, which means shadows, reflections, ambient occlusion, etc. It needs to be done at good quality (not a 720p low-texture reflection in nearby windows), and it needs to deliver a solid 60+ fps even on mid-tier cards (where the majority of buyers are).

Considering that is not feasible now and won't be for at least 3-4 years, I consider it pointless and needless, a gimmick as of right now and in the short-term future!