AMD 6000 reviews thread


BFG10K

Lifer
Aug 14, 2000
22,709
2,971
126


Wow, even the number 3 card has 16GB VRAM and is faster than the 2080TI. And the $1000 6900XT matches the $1500 3090 in performance.

The 3000 parts don't look so hot now.

Post reviews edit:
It's astonishing what AMD have managed to achieve with both the Ryzen 5000 and the Radeon 6000, especially given the absolutely minuscule R&D budget and resources compared to nVidia/Intel. Lisa Su is definitely the "Steve Jobs" of AMD with such a remarkable turnaround.

6900XT:
(It's absolutely amazing to see AMD compete with the 3090)


 
Last edited:

leoneazzurro

Senior member
Jul 26, 2016
927
1,452
136

It seems that AMD is stating they will improve the situation in January
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
I think Mopetar's theory is interesting, but we can see the yields:

100x 6800s delivered vs only 25x 6800XTs. That would seem to imply 6800s are yielding at 4x the rate of the 6800XT.

This bears out what I saw on Amazon on launch day: the 6800s stayed in stock a lot longer than the 6800XTs.


This does not discredit Mopetar's theory. It is possible defects are taking 6900XTs and converting them to 6800s, leaving the 6800XT as a hole in the lineup.

It's true that this doesn't really change the theory, since the theory is about selling every die possible as a 6900XT and makes no predictions about how binned dies are split between the 6800 and 6800XT. Still, it would be odd for AMD to pursue that kind of profit-maximizing strategy while shipping a 4:1 ratio of 6800 to 6800XT dies, which is even worse value per wafer, unless they absolutely had to bin that way to hit performance targets.

Last year at this time TSMC reported the defect density on their 7nm node as 0.09 defects/cm^2. I also recall a more recent interview (I forget where and with whom) in which someone from AMD said that more than 95% of their chiplets were coming back defect free, which puts the defect density closer to 0.06/cm^2. If that was long enough ago, it may have improved further. Of course, just because none of the silicon is defective doesn't mean it can all hit the same clock speeds or operate within particular voltage or power thresholds, so there's still a need to bin and disable some of the cores to ensure the remaining ones can hit the required performance metrics.
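
As a rough sanity check on that 0.06 figure, you can invert a simple Poisson yield model, Y = exp(-D*A). The chiplet area below (~80 mm^2, roughly a Zen CCD) and the choice of the Poisson model are my assumptions, so treat this as a ballpark sketch rather than anything official:

```python
import math

def defect_density_from_yield(yield_fraction, die_area_mm2):
    """Invert the Poisson yield model Y = exp(-D * A) to get D in defects/cm^2.

    Assumes defects are randomly (Poisson) distributed; fabs often use fancier
    models (Murphy, negative binomial), so this is a ballpark only.
    """
    area_cm2 = die_area_mm2 / 100.0
    return -math.log(yield_fraction) / area_cm2

# Hypothetical chiplet area of ~80 mm^2 (roughly a Zen CCD) and the
# "more than 95% defect free" figure quoted above.
print(defect_density_from_yield(0.95, 80))   # ~0.064 defects/cm^2
print(defect_density_from_yield(0.95, 74))   # ~0.069 with a smaller ~74 mm^2 CCD
```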

We can use those same figures and run the calculations for Navi 21. We know the die size is 29 mm x 18.55 mm, and if you plug that into a die calculator you get 96 - 100 Navi 21 dies per wafer. The more recent estimated defect density gives AMD an average of 70 - 73 fully functional dies and 26 - 27 dies with some defective silicon. If we use TSMC's numbers from last November it works out to 60 - 63 full dies and 36 - 37 defective dies.
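
If anyone wants to reproduce those numbers, here's a quick sketch. I'm using a common textbook approximation for gross dies per wafer (a real die calculator also accounts for scribe lines and edge exclusion, which is why it lands a bit lower at 96 - 100) and the same simple Poisson yield model as above:

```python
import math

WAFER_DIAMETER_MM = 300.0

def gross_dies_per_wafer(die_w_mm, die_h_mm):
    """Common approximation: wafer area / die area minus an edge-loss term.
    A proper die calculator (scribe lines, edge exclusion) lands a bit lower."""
    area = die_w_mm * die_h_mm
    r = WAFER_DIAMETER_MM / 2.0
    return math.pi * r**2 / area - math.pi * WAFER_DIAMETER_MM / math.sqrt(2.0 * area)

def poisson_yield(die_area_mm2, defect_density_cm2):
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-defect_density_cm2 * die_area_mm2 / 100.0)

die_w, die_h = 29.0, 18.55                    # Navi 21 dimensions quoted above
gross = gross_dies_per_wafer(die_w, die_h)    # ~103 here; die calculators give ~96-100

for d in (0.06, 0.09):                        # recent estimate vs. TSMC's November figure
    y = poisson_yield(die_w * die_h, d)
    print(f"D={d}: yield {y:.0%}, ~{gross * y:.0f} clean dies, "
          f"~{gross * (1 - y):.0f} dies with at least one defect")
```

With the lower 96 - 100 gross count from a die calculator, those yields work out to roughly the 70 - 73 and 60 - 63 clean dies quoted above.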

Defects are essentially randomly distributed so they're as likely to hit any part of the wafer as any other. With a die shot of Navi 21 and an image editor we can figure out how much area of each die is taken up by the various components. From here we can start to estimate how many dies may fall into certain bins. Any dies with a defect in the front end are just scrap, and the same goes for anything else that doesn't have redundancy like the circuitry for the display connectors. Depending on how the L2 is connected (I'd have to look up if it's segmented off between the different shader engines or not) defects there could essentially brick a die. I know Nvidia can disable parts of the L2 cache for example.

One curious thing about these parts is that the 6800 doesn't have any disabled memory controllers or any disabled infinity cache. The memory controllers / connections aren't a huge part of the chip (about 64 mm^2 based on estimates I found for RDNA1 chips, which should use about the same amount since they have the same 256-bit bus on the same 7nm node), so it's unlikely that you'd see too many defects there. However, the infinity cache is said to take up about 86 mm^2, or about 16% of the die space. It's possible that they just built some redundancy into the infinity cache so a defect is far less likely to cause an actual problem, or has a good chance of being worked around.

Together the memory controllers and infinity cache probably account for slightly more than 25% of the die area. Depending on redundancy in the infinity cache, somewhere between 3 - 9 chips per wafer should have a defect in these areas and can't be used for any of the Navi 21 parts we know about so far. I wouldn't be surprised if these end up in some OEM only part.
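
Here's a back-of-the-envelope way to get at that range, assuming (my simplification) that defects land uniformly by area and that each defective die has a single defect:

```python
# Rough sketch: how many defective dies per wafer have their defect in the
# memory controllers / infinity cache? Area figures are the estimates above;
# the uniform-by-area and one-defect-per-die assumptions are mine.
DIE_AREA_MM2 = 29.0 * 18.55          # ~538 mm^2
MC_AREA_MM2 = 64.0                   # memory controller / PHY estimate
IC_AREA_MM2 = 86.0                   # infinity cache estimate

defective_dies_per_wafer = {"D=0.06": 26, "D=0.09": 37}

for label, n_def in defective_dies_per_wafer.items():
    # If the infinity cache has enough redundancy, only MC defects matter here.
    best = n_def * MC_AREA_MM2 / DIE_AREA_MM2
    worst = n_def * (MC_AREA_MM2 + IC_AREA_MM2) / DIE_AREA_MM2
    print(f"{label}: ~{best:.0f}-{worst:.0f} dies/wafer hit in the MC / infinity cache")
```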

The bins for the 6800/XT can exist for a few reasons. First, hardware defects: anything that takes out one of the four shader engines, or any part essential to its functioning (such as one of the ROPs), automatically makes the die a 6800. A 6800XT would come from other types of defects, the main one being a defect in one of the WGP blocks (think CUs in the old GCN architecture): basically some of the shaders can't function, so that WGP gets disabled along with three others to reach the 6800XT's 72-CU configuration.

The other reason for the bins is intentionally disabling some hardware because it can't hit required clock speeds or stay within power budgets and drags the rest of the chip down with it. A full die might get converted to a 6800XT if this problem occurs in the WGPs, or to a 6800 if it's in anything associated with one of the shader engines. Presumably a 6800XT with a defect in a WGP could get busted down to a 6800 if the shader engine has performance problems or another set of WGPs in that same shader engine has problems.

I'll need to do some digging or play around with an image editor if I can find an annotated die shot to figure out the relative area for the rest of the chip. That would help us determine how defects might naturally create the bins for the 6800/XT. The shaders should be the predominant feature on the die, so a defect should most likely hit a WGP. The layout of certain sections of the chip could influence the likelihood of poor performance, but AMD would have learned to minimize this from RDNA1.
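
Once those relative areas are pinned down, the binning estimate itself is just a matter of splitting the defective dies in proportion to where a single defect can land. The area fractions below are placeholders purely to show the method; they are not measurements from a die shot:

```python
# Hypothetical area fractions for where a single defect might land. These are
# placeholders, NOT measured values; swap in real numbers from an annotated
# die shot once the relative areas are worked out.
area_fractions = {
    "non-redundant logic (front end, display, etc.) -> scrap":  0.10,
    "shader engine infrastructure -> bin as 6800":              0.15,
    "WGPs -> bin as 6800 XT":                                   0.50,
    "memory controllers / infinity cache -> OEM part or scrap": 0.25,
}

defective_dies = 27   # per-wafer figure at D ~ 0.06 from the earlier estimate

for bucket, frac in area_fractions.items():
    print(f"{bucket}: ~{defective_dies * frac:.1f} dies per wafer")
```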

Another piece of data that we want is OC results for 6800 and 6800XT cards. The clock speeds for the 6800 are 125/145 MHz lower for the base/boost, so we need to see where they end up with an OC to determine whether the 6800 is being held back by performance issues and an inability to hit the same speeds, or whether they can overclock just as well and the lower clocks are simply because AMD wanted a particular result for the reference spec. Combined with the data on where we expect defects, we could make better guesses at how many chips are being artificially binned due to an inability to hit required clocks and estimate how many full dies would need to get binned for this reason.

I do want to note that there's one potential flaw with the link you posted: we have no idea if it's representative of other retailers in other countries. AMD doesn't just want to allocate cards evenly, but to adjust the mix based on what's going to sell best in each market, and they have historical sales data to make those kinds of adjustments. Subjective observation isn't normally the best evidence and may be as much confirmation bias as anything else. The 6800 seeming to stay in stock longer could also be a matter of lower interest, or of buyers only grabbing one after they realize a 6800 XT can't be had.

That aside, any data is better than no data. We just want to be careful not to read too much into it (or, on the flip side, discard it because it doesn't fit our preconceived notions well enough) until additional data points can help confirm the initial point, or give a better idea of how much variability exists between the points and whether a reasonable estimate can be produced for the average value. Without having the other pieces filled in, I would say it makes my original estimates seem too optimistic about the number of 6900XT dies. Unless those numbers don't represent the overall ratio, or the analysis doesn't point to that kind of ratio being an expected outcome, it would point to more full dies being binned.
 
Last edited:

Guru

Senior member
May 5, 2017
830
361
106
I disagree. Right now ray tracing is being done over the top, so it looks like crap frequently enough.

But on console they don't have the power to overdo it. I think we will see lightly raytraced games soon that don't hog performance and don't overdo the brightness of the lighting.
If you are going to limit it even more, what is the point? I feel like for ray tracing to be viable it needs to be done completely, the full spectrum, which means shadows, reflections, ambient occlusion, etc. It needs to be done at good quality (not a 720p, low-texture reflection in nearby windows) and it needs to deliver a solid 60+ fps even on mid-tier cards (where the majority of buyers are).

Considering that is not feasible now and won't be for at least 3-4 years, I consider it pointless, needless, a gimmick as of right now and in the short term future!
 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136

Has anyone else replicated these results? I haven't seen anyone else report this issue. Also, why is their 6800XT only matching a 2080Ti at best, even under DX12? I don't trust those numbers unless someone else is able to replicate them.

Edit: As a follow-up, here are TPU's numbers for Witcher 3 at 1080p. As far as I know, they use custom benchmark runs during gameplay, so no canned benchmarks. TPU doesn't exactly have an AMD-friendly reputation either. In their review the 6800XT seems to have no problem pushing out 216 fps and essentially matching a 3080. It's most likely different game scenes, but if the 6800XT was CPU limited, it should have issues reaching these higher FPS as well. I don't know, it seems like there is something weird with the PurePC results, as no one else has shown this issue, but maybe one of the other reviewers can try to replicate it using the same scenes and hardware.

[TPU chart: The Witcher 3, 1920x1080]
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
Has anyone else replicated these results? I haven't seen anyone else report this issue. Also, why is their 6800XT only matching a 2080Ti at best, even under DX12? I don't trust those numbers unless someone else is able to replicate them.
It happens in Black Mesa (DX9), Greedfall (DX11), Jedi Fallen Order (DX11), Witcher 3 (DX11), Wolcen (DX11) - all tested by PCGH.de.

This is a recurring problem with AMD.
 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
It happens in Black Mesa (DX9), Greedfall (DX11), Jedi Fallen Order (DX11), Witcher 3 (DX11), Wolcen (DX11) - all tested by PCGH.de.

This is a recurring problem with AMD.

Again, other review sites with Witcher 3 don't show the same thing, but it looks like some do, so what is really going on? Could it be down to something like Hairworks being turned on limiting the AMD cards' performance? I don't know, it would need further investigation. There are also multiple DX11 titles that perform better on AMD hardware at 1080p, so it's not a simple case of AMD driver overhead.


AMD Radeon RX 6800 XT and RX 6800 Review | Tom's Hardware
 

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
Again, other review sites with Witcher 3 don't show the same thing, but it looks like some do, so what is really going on? Could it be down to something like Hairworks being turned on limiting the AMD cards' performance? I don't know, it would need further investigation. There are also multiple DX11 titles that perform better on AMD hardware at 1080p, so it's not a simple case of AMD driver overhead.


AMD Radeon RX 6800 XT and RX 6800 Review | Tom's Hardware
I wouldn't give too much importance to AC:Valhalla as NVIDIA has some driver optimizations to do in that game, judging by these results. The point is that this performance issue happens in older games which no longer receive updates.

EDIT: Golem.de reports it happening in Hunt Showdown
[Golem.de chart: Hunt Showdown, Blanchett Graves (1080p, high preset, D3D11)]

EDIT2: TPU doesn't test CPU-intensive scenarios so maybe that's how they get normal results?
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
I wouldn't give too much importance to AC:Valhalla as NVIDIA has some driver optimizations to do in that game, judging by these results. The point is that this performance issue happens in older games which no longer receive updates.

EDIT: Golem.de reports it happening in Hunt Showdown
[Golem.de chart: Hunt Showdown, Blanchett Graves (1080p, high preset, D3D11)]

EDIT2: TPU doesn't test CPU-intensive scenarios so maybe that's how they get normal results?

So the argument is not about DX11 overhead but lack of AMD optimizing drivers for new GPUs in old games?
 

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
Step up your game and find more that favor Nvidia! Maybe start dropping some showing the weakness of AMD's RT....I'm still looking for a 6800XT and maybe it'll drive down the prices!
This is a known issue with AMD GPUs and older APIs. Instead of trying to act smart you could simply say that you're not aware of it.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
What about the crazy bait and switch pricing?

Dang silly AIBs doing the same thing they did with the RTX 3080s. MSRP $699 and charging up to $850 for them! I say we get out the pitchforks!
 
  • Like
Reactions: Leeea

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
This is a known issue with AMD GPUs and older APIs. Instead of trying to act smart you could simply say that you're not aware of it.

I couldn't really care less about the performance in games I have no intention of playing in the first place. I could see being somewhat concerned if it affected me, but otherwise what's the point? It's kind of like bitching about driver issues when I haven't had any.

Are your examples games you want to play?
 
  • Like
Reactions: prtskg

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
I couldn't really care less about the performance in games I have no intention of playing in the first place. I could see being somewhat concerned if it affected me, but otherwise what's the point? It's kind of like bitching about driver issues when I haven't had any.

Are your examples games you want to play?
You buy your GPU with the specific intention of playing only those games which perform better on it? I know of nobody who makes their purchasing decision that way.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
You guys seem to be confusing what MSRP is. The MSRP of the AMD reference card is $650 (base model). That's not the MSRP for AIBs. AMD is not the manufacturer at this point; they only supply the GPU and VRAM. The AIBs can price their cards as they see fit, typically 0-10% more than base. What's typical is that they release multiple tiers of cards: cheap ones close to the reference MSRP and expensive ones at a premium. People are angry because with this launch there are no cheap AIB models, whereas the NV launch had them. The $800 Red Devil is not a scalped price; that's the official MSRP for that card.

When AMD says there will be AIB models in 4-8 weeks at $650, those will be cheaper-made models that won't overclock as high. That's why the reviewers are pissed (rewatch the Hardware Unboxed video): they're basically stuck in the bad position of giving you unrealistic performance/price information. Even then there's no promise that the AIBs will produce many of them (if what they say about margins is true and not just an excuse to raise their prices); business-wise, they should make and sell the model that gives them the best return. For the AIBs, the GPU/VRAM cost is the same for an $800 model or a $650 model. Which would you produce if demand is high?
 
  • Like
Reactions: Leeea and tamz_msc

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
You guys seem to be confusing what MSRP is. The MSRP of the AMD reference card is $650 (base model). That's not the MSRP for AIBs. AMD is not the manufacturer at this point; they only supply the GPU and VRAM. The AIBs can price their cards as they see fit, typically 0-10% more than base. What's typical is that they release multiple tiers of cards: cheap ones close to the reference MSRP and expensive ones at a premium. People are angry because with this launch there are no cheap AIB models, whereas the NV launch had them. The $800 Red Devil is not a scalped price; that's the official MSRP for that card.

When AMD says there will be AIB models in 4-8 weeks at $650, those will be cheaper-made models that won't overclock as high. That's why the reviewers are pissed (rewatch the Hardware Unboxed video): they're basically stuck in the bad position of giving you unrealistic performance/price information. Even then there's no promise that the AIBs will produce many of them (if what they say about margins is true and not just an excuse to raise their prices); business-wise, they should make and sell the model that gives them the best return. For the AIBs, the GPU/VRAM cost is the same for an $800 model or a $650 model. Which would you produce if demand is high?

Exactly... The AIBs are free to set their own price points in the end. If you were an AIB, what would you release first? With the low supply, high demand, and crazy scalper prices, why would you launch the lower-priced products first? Even the Red Devil's MSRP is cheaper than the scalpers'! It actually looks quite impressive, but it wouldn't fit in my case without some major reshuffling.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
You buy your GPU with the specific intention of playing only those games which perform better on it? I know of nobody who makes their purchasing decision that way.

I still don't see why I should be concerned with the performance of a game that I wouldn't be playing anyway. It's probably best to look at the performance of the games one would actually play, at their resolution of choice. If card X gets exceptional performance in game Y but sucks in the game I want to play, why would I buy it?
 

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
I still don't see why I should be concerned with the performance of a game that I wouldn't be playing anyway. It's probably best to look at the performance of the games one would actually play, at their resolution of choice. If card X gets exceptional performance in game Y but sucks in the game I want to play, why would I buy it?
What happens when there is a game that you really want to play but whose performance you know sucks on your card? Right now that seems to be the issue with AMD cards, except games where this happens are relatively old so the new hardware can brute force its way into delivering high enough frames per second. But what happens when the issue is with a newer game?
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,808
7,163
136
No, the argument is that AMD has driver overhead in older APIs which, except for a few instances (mainly newer games), they do nothing to fix.

- I guess the question is how do we know it's driver overhead and not just AMD sucking in a particular title?

I would assume the best way to show a CPU-side bottleneck would be to increase the resolution and watch for the AMD cards displaying strange behavior, like maintaining or losing little performance as the burden shifts to the GPU.

Or are there CPU utilization charts which show that AMD is taxing the CPU differently than their NV equivalent?

At the end of the day, if the issue exists, I can see the reasoning. New hardware is powerful enough to push 100+ frames in any older game; it's not like old games are being reduced to slideshows, so why put very limited GPU driver team resources there?
 
  • Like
Reactions: Tlh97 and Leeea

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
What happens when there is a game that you really want to play but whose performance you know sucks on your card? Right now that seems to be the issue with AMD cards, except games where this happens are relatively old so the new hardware can brute force its way into delivering high enough frames per second. But what happens when the issue is with a newer game?

Most of the time new game performance is fixed with driver optimizations. I don't buy new games at launch at full retail pricing anyways. If there's a game that looks interesting to me then I'll look for reviews that show the performance.

I guess the point is there's no chance that an RX 6800XT is going to be slower than my RX 5700 in any game I'd wish to play. Ignoring a freak driver bug, of course.
 
  • Like
Reactions: Tlh97 and Leeea

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
- I guess the question is how do we know it's driver overhead and not just AMD sucking in a particular title?

I would assume the best way to show a CPU-side bottleneck would be to increase the resolution and watch for the AMD cards displaying strange behavior, like maintaining or losing little performance as the burden shifts to the GPU.

Or are there CPU utilization charts which show that AMD is taxing the CPU differently than their NV equivalent?

At the end of the day, if the issue exists, I can see the reasoning. New hardware is powerful enough to push 100+ frames in any older game; it's not like old games are being reduced to slideshows, so why put very limited GPU driver team resources there?
PurePC had some CPU utilization charts:

[PurePC charts: Ryzen 7 5800X + Radeon RX 6800 XT, CPU utilization in CPU-bound scenes]


They also did some testing a while back with an i5-10400F and R5 3600 paired with a RTX 2060 Super and RX 5700XT. They found similar issues in Witcher 3 and Kingdom Come, and to a lesser extent in Far Cry New Dawn.
What a joke. They didn't even specify which settings were used during those "tests".

I bet they used GameWorks effects at least for Watch Dogs 2 and Witcher 3.
The funny thing is that with Hairworks turned on in Witcher 3, the 6800XT took a slight lead. Besides, GameWorks features are GPU-intensive, so they should actually alleviate the problem.
 
Last edited: