Discussion Radeon 6500XT and 6400


GodisanAtheist

Diamond Member
Nov 16, 2006
6,783
7,115
136
Just getting a thread up for this. Navi 24 will ride... Q1 2022... ish.


I figure the 6500XT lands in ~5500XT territory for ~$200.
 
  • Like
Reactions: Tlh97

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
Saw on Reddit that PCMRs are getting the ASRock Phantom Gaming model for $199 at MicroCenter. At that price, it is untouchable for a 12th gen budget gaming build.
Buying this card is very much not PCMR, but hey. Also Newegg had the Pulse for $199 earlier today.
It would actually be nice if AMD is able to keep pushing out just enough supply of cards sold at MSRP so that scalpers end up eating a bit of a loss for once.

They may not be able to keep them in stock at all times, but if there's a weekly drop then more people will hold out and avoid eBay.

It seems like they're able to keep 6500XTs in stock @ pretty much MSRP here in Denmark, even if you have to get in line to actually buy one and you can't be picky about the specific model. I think that's pretty reasonable all-round.

If everyone can avoid fleeceBay, so much the better.

There seemed to be ZERO forward thinking, or consideration for other use cases, so there may be more real HW limitations on this as well.

My own personal gripe with the 6400 and 6500XT isn't even the performance. It's the crippled media block. Would have been decent if that had been fully featured. Why AMD? Just why?
 

jpiniero

Lifer
Oct 1, 2010
14,584
5,206
136
My own personal gripe with the 6400 and 6500XT isn't even the performance. It's the crippled media block. Would have been decent if that had been fully featured. Why AMD? Just why?

My speculation earlier was that the patent costs more than the transistors. Remember this is meant to be a competitor to the MX* line, and it only got this desktop release because of the GPU price surge caused by mining.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
My own personal gripe with the 6400 and 6500XT isn't even the performance. It's the crippled media block. Would have been decent if that had been fully featured. Why AMD? Just why?

As other posters have speculated, this apparently started life as a product that was intended to be used in laptops and paired with an APU that already had those capabilities on its own media block and so they were cut in order to save transistors.
 
  • Like
Reactions: Tlh97 and Leeea

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
My speculation earlier was that the patent costs more than the transistors.

I thought AV1 was supposed to be free and open to all, without licence and patent fees?

As other posters have speculated, this apparently started life as a product that was intended to be used in laptops and paired with an APU that already had those capabilities on its own media block and so they were cut in order to save transistors.

Or they just re-used an older decode block. The IP is supposed to be modular and portable after all. Still seems an odd place to save transistors. I mean, even if the APU has the decode block, you gain a lot more flexibility* for what must be an insignificant amount (area) of transistors at 6nm. Losing the HW encoder I can understand, but this?

If everything else failed, they could have gone with a hybrid decode solution like on Polaris for VP9. Not pretty or particularly efficient, but it works.

*Like desktop cards.
 

blckgrffn

Diamond Member
May 1, 2003
9,123
3,056
136
www.teamjuchems.com
I can only point out the same thing I did for @AtenRa: if someone asks you for build advice today, will you also mention waiting until the 3050 launch? Because if you do mention they may want to wait, then you're just as intellectually conflicted as the people criticizing the 6500XT, as some of them would still buy it given no other choice. A product is good or bad no matter whether someone is metaphorically pointing a gun to your head in order to make you buy it.

Let me make the razor even sharper. What would you recommend to Joe today:
  1. 6500XT @ $199 w/ 4GB VRAM
  2. 6500XT @ $249 w/ 8GB VRAM
Why aren't we getting the option to choose? Because of the miners?! Heck, put both products on the market; let's see the 8GB variant's price blow up because of crypto and AMD getting grateful nods from gamers happy they have a 4GB version just for them. Or could it just be that the review comparison would be problematic for the 4GB variant, while the $249 MSRP of the 8GB SKU would look bad in upcoming reviews? (Even if MSRP isn't representative of real market pricing anymore.)

Here's a mainstream reviewer expressing her... "feeling" for the immediate future:

[attached screenshot]

They don't want to sell more of these.

It's likely that any other product they sell has a better margin. Container space doesn't know about CPUs vs GPUs. This has got to be in the running for the lowest-value-density product they sell (in terms of dollars per unit of logistics displacement).

I'll stick to my observation that this was supposed to be an OEM-level part, only for new (i.e., PCIe 4) builds that are likely overwhelmingly Intel-based (because that's how it is), because OEMs wanted something, anything, to sell, and this was conceived in the darkest days of the silicon (substrate, etc.) shortages. I realize they said laptops, but in reality this seems a better fit for any OEM desktop.

Dell will make this a $100-$150 stepping stone from the 6400, and a bunch of people will click the button because they don't want the slowest one, they want the +1. That's just an educated hunch.

Due to the ongoing s-show that is current global economies, mining, and logistics, we all get to enjoy this.

An 8GB version would likely show minimal improvements in a PCIe 4 system, likely super similar to an 8GB RX580 vs a 4GB RX580, and looking back there are pages and pages and pages of discussion about how wasteful it was to spend money on the extra RAM. I know that's a discussion that goes back to like 2015 and 8GB 290s, but... yeah.

I buy every 1060 3GB and GTX 970 I can lay my hands on sub-$200, and they are used, dusty, and sometimes smoky. I'd love to have new cards with warranties to include in builds, just because.

¯\_(ツ)_/¯

Maybe we'll get a little window of availability now with crypto looking tough; I see lots of NIB cards on my local CL for... ~50% MSRP markup, which is a bit of a reprieve.

I still think the 6600/3060 is a starting spot for AAA/FPS "serious" gamers going forward, but man, you should have seen how excited this 13-year-old was three days ago to buy a $600 computer from me with an 8600K and a GTX 970. Kid was shaking a little bit, and this was all his money from reffing soccer. Everything he wanted to play with his older brother was suddenly on the table.

1080p and medium details is still plenty to "just play".

Sorry for the ramble here.
 
Last edited:

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,848
106
CapFrameX compared RX 6500XT to RX 6800XT at different clocks. Tweet
RX 6800 XT: ~40% more fps (1600MHz to 2500MHz)
RX 6500 XT: ~18% more fps (1600MHz to 2500MHz)

The RX 6500 XT is strictly bandwidth limited.
[attached benchmark chart]


The scores are not what I expected.
If it's really BW limited, I don't understand why increasing clock speed from 1600 MHz to 1900 MHz results in only 5% higher FPS, when the clock increased by 18.75%.
At that clock, bandwidth should be more than enough, and I would expect near-linear scaling.
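
Just to put a rough number on that mismatch, here's a quick sketch (this is only arithmetic on the figures quoted above; the 1600→1900 MHz point is read off the chart, nothing official):

```python
# Quick scaling-efficiency check on the CapFrameX numbers above.
# Assumption: if the core were the only bottleneck, FPS gain would track
# clock gain roughly 1:1, so a ratio well below 1.0 points at another limiter.

def scaling_efficiency(clock_lo_mhz, clock_hi_mhz, fps_gain):
    clock_gain = clock_hi_mhz / clock_lo_mhz - 1.0
    return fps_gain / clock_gain

print(scaling_efficiency(1600, 1900, 0.05))  # 6500 XT, partial range: ~0.27
print(scaling_efficiency(1600, 2500, 0.18))  # 6500 XT, full range:    ~0.32
print(scaling_efficiency(1600, 2500, 0.40))  # 6800 XT, full range:    ~0.71
```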
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
CapFrameX compared RX 6500XT to RX 6800XT at different clocks. Tweet

[attached benchmark chart]


The scores are not what I expected.
If it's really BW limited, I don't understand why increasing clock speed from 1600 MHz to 1900 MHz results in only 5% higher FPS, when the clock increased by 18.75%.
At that clock, bandwidth should be more than enough, and I would expect near-linear scaling.

From the above it seems that the 6500M at only 50W will be significantly faster than the RTX 3050 Mobile (75W TDP).
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
CapFrameX compared RX 6500XT to RX 6800XT at different clocks. Tweet

[attached benchmark chart]


The scores are not what I expected.
If it's really BW limited, I don't understand why increasing clock speed from 1600 MHz to 1900 MHz results in only 5% higher FPS, when the clock increased by 18.75%.
At that clock, bandwidth should be more than enough, and I would expect near-linear scaling.

Doom is one of those titles that needs more memory than a lot of other titles. If it's bottlenecked by memory bandwidth (or anything else), then we should expect clock speed increases not to scale linearly.

The 6800 XT gets a ~40% increase for a ~56% increase in clock speed. The 6500 XT only gets an ~18% increase for that same clock speed increase. We'd probably want to look at scaling on something like the 6700 XT, which has enough VRAM to avoid a bottleneck but isn't so powerful that it might bottleneck somewhere else, in order to get a better picture of how the 6500 XT should be expected to perform if it weren't being constrained.

Is the Infinity Cache also based off the core clock? If it were, it may even be that which is most responsible for any uplift on the 6500 XT, as opposed to the cores themselves running faster.
 

blckgrffn

Diamond Member
May 1, 2003
9,123
3,056
136
www.teamjuchems.com
Doom is one of those titles that needs more memory than a lot of other titles. If it's bottlenecked by memory bandwidth (or anything else), then we should expect clock speed increases not to scale linearly.

The 6800 XT gets a ~40% increase for a ~56% increase in clock speed. The 6500 XT only gets an ~18% increase for that same clock speed increase. We'd probably want to look at scaling on something like the 6700 XT, which has enough VRAM to avoid a bottleneck but isn't so powerful that it might bottleneck somewhere else, in order to get a better picture of how the 6500 XT should be expected to perform if it weren't being constrained.

Is the Infinity Cache also based off the core clock? If it were, it may even be that which is most responsible for any uplift on the 6500 XT, as opposed to the cores themselves running faster.

Right, honestly it would be way more interesting (to me) to see differences in Unity or Unreal engine games (not sure what the other major ones might be now) that are more indicative of a much larger percentage of games.

I get Doom puts a lot of pressure on it, but that is 2 games?
 
  • Like
Reactions: Tlh97 and ryan20fun

Leeea

Diamond Member
Apr 3, 2020
3,617
5,363
136
There will never be a new $150 gaming video card ever again. Or probably under $200. Discuss.

Meet the RX 6400.

. . .
My own personal gripe with the 6400 and 6500XT isn't even the performance. It's the crippled media block. Would have been decent if that had been fully featured. Why AMD? Just why?
Any old potato can software decode AV1 at 1080p, which is likely the best monitor resolution this card will ever be paired with.

Newer CPUs can decode AV1 at 4K, although 8K is still a bit of a leap for most CPUs.

It seems unlikely people will be pairing 6500XTs with 8K monitors.

The decode thing seems irrelevant for likely use cases.


An 8GB version would likely show minimal improvements in a PCIe 4 system, likely super similar to an 8GB RX580 vs a 4GB RX580, and looking back there are pages and pages and pages of discussion about how wasteful it was to spend money on the extra RAM. I know that's a discussion that goes back to like 2015 and 8GB 290s, but... yeah.
In hindsight, it turns out the 8GB version was way better. At least in resale value if nothing else.

I used both the 4GB RX580 and the 8GB RX580. Both were good cards, but the 8GB was and is flat-out better. I no longer daily drive an RX580 8GB, but it did age way better than the 4GB version.

I would argue the RX580 8GB is still superior to any 4GB card currently in production by AMD or Nvidia.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
I get Doom puts a lot of pressure on it, but that is 2 games?

It's not bad to throw something like Doom in the mix, but you'd also want something that isn't going to stress the memory bandwidth at all, just to be able to determine to what extent that is an issue. I'm pretty sure there are some esports titles that aren't very memory-intensive that would be useful to look at.


In hindsight, it turns out the 8GB version was way better. At least in resale value if nothing else.

People always have the same argument about more or less memory, and for a midrange card no one has ever been bitten by getting the card that has more.
 

blckgrffn

Diamond Member
May 1, 2003
9,123
3,056
136
www.teamjuchems.com
In hindsight, it turns out the 8GB version was way better. At least in resale value if nothing else.

In the "normal course" of things there was almost no resale premium.

Ha, I sold an 8GB RX470 for $385 locally about one year ago that I paid $80 for on eBay about 8 months prior. That's not normal, and at the time I bought it, the extra 4GB of RAM was worth about a $10 premium. And even then, it was like, really, why? They choked in games before the 8GB could shine. They were done, put a fork in 'em. I did it because it was so cheap, why not? It was for my son's first gaming rig. Great investment in retrospect.

I agree they are better now, but the consensus then was: why pay the extra $50 when you have to run an id Tech game at Übermensch settings (Wolfenstein?) to even tell the difference, because generally speaking the card was too slow anyway. Back then, $50 and a year or two bought you, like, a real upgrade! Back when $50 meant something! :D

And if you had sold an 8GB RX 580 a few months ago, it would have paid for a 6600 or a 3060 from Newegg by now, for sure. ;)
 

Leeea

Diamond Member
Apr 3, 2020
3,617
5,363
136
And if you had sold an 8GB RX 580 a few months ago, it would have paid for a 6600 or a 3060 from Newegg by now, for sure. ;)
Sell it?

I sent it to the mines!

I have not sold a video card in years. I think when mining finally crashes there will be a bunch of people like me who are not true miners, but have a few old video cards that were mining because it was easy money. When we dump our stock, the GPU crisis will likely come to an end.
 

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
Meet the RX 6400.


Any old potato can software decode AV1 at 1080p, which is likely the best monitor resolution this card will ever be paired with.
Newer CPUs can decode AV1 at 4K, although 8K is still a bit of a leap for most CPUs.

It seems unlikely people will be pairing 6500XTs with 8K monitors.

The decode thing seems irrelevant for likely use cases.

You've got to ask yourself one question: how old (how many cores and threads) is that CPU potato? :grinning:

Here is my example with a Zen 2 R5 4650G, but configured as a 2-core/4-thread CPU.

AV1 1080p/60 (screenshot attached)

AV1 1440p/60 (screenshot attached)

AV1 4K/30, quite a surprise even for me. As we can see, yes, it can play 4K/30 AV1 YouTube video without problems. (screenshot attached)

AV1 4K/60: with the 2/4 CPU, as expected, only sound + frozen video.

An AV1 hardware decoder is always welcome. But if we use a modern or capable CPU (minimum 4C/8T), realistically there is no need to cry about the lack of an AV1 decoder. 8K? Blah, who cares about 8K, because 99% of people today or in the future still won't use more than a 4K monitor or TV.
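
If anyone wants to repeat this kind of test on their own potato, here's a minimal sketch using ffmpeg's -benchmark option (it assumes an ffmpeg build with a software AV1 decoder such as dav1d on the PATH, and clip_av1.mkv is just a placeholder name for whatever AV1 sample you use):

```python
# Rough software AV1 decode benchmark via ffmpeg.
# Assumes ffmpeg (with an AV1 decoder, e.g. dav1d) is on PATH;
# "clip_av1.mkv" is a placeholder for your own AV1 test clip.
import subprocess

result = subprocess.run(
    ["ffmpeg", "-benchmark", "-i", "clip_av1.mkv", "-f", "null", "-"],
    capture_output=True, text=True,
)

# ffmpeg reports progress and the final bench line on stderr;
# a "speed=" value >= 1.0x means the CPU keeps up with real-time playback.
for line in result.stderr.splitlines():
    if "speed=" in line or "bench:" in line:
        print(line.strip())
```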


 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
AV1 4K/30, quite a surprise even for me. As we can see, yes, it can play 4K/30 AV1 YouTube video without problems.

Well, my Athlon 200GE can just about manage 4K30 software decoding. With the occasional dropped frame. 1440p/60 and below aren't any problem.

AV1 4K/60: with the 2/4 CPU, as expected, only sound + frozen video.

That's my experience too.
 

blckgrffn

Diamond Member
May 1, 2003
9,123
3,056
136
www.teamjuchems.com
Sell it?

I sent it to the mines!

I have not sold a video card in years. I think when mining finally crashes there will be a bunch of people like me who are not true miners, but have a few old video cards that were mining because it was easy money. When we dump our stock, the GPU crisis will likely come to an end.

I didn't say what I did with my dirty fiat. ;)
 
  • Like
Reactions: Leeea

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
CapFrameX compared RX 6500XT to RX 6800XT at different clocks. Tweet

[attached benchmark chart]


The scores are not what I expected.
If it's really BW limited, I don't understand why increasing clock speed from 1600 MHz to 1900 MHz results in only 5% higher FPS, when the clock increased by 18.75%.
At that clock, bandwidth should be more than enough, and I would expect near-linear scaling.

Core clock speeds in RDNA2 also directly affect Infinity Cache bandwidth. The 6800 XT has 128MB of IfC, resulting in a VERY high hit rate. The higher you crank up the GPU clock speed, the higher the average memory bandwidth goes, at nearly the same rate. The 6500 XT has 16MB of IfC, roughly 12% of the 6800's capacity, and has a significantly lower hit rate. In addition, because of how AMD chose to cut the chip down, the cache is technically slower than the VRAM on the card with respect to total bandwidth; it just has a much lower response time. Increasing the GPU clocks by the same amount as the 6800 XT doesn't even get it to the same bandwidth as the VRAM that's already installed. The performance scaling makes sense in this context. It's a very cut-down part and is up against serious limitations.
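
To sketch why that plays out this way, here's a toy effective-bandwidth model (the channel counts, hit rates, and VRAM figures below are illustrative assumptions for the two cards, not measured or AMD-published per-SKU numbers):

```python
# Toy model: effective bandwidth when the Infinity Cache runs off the core clock.
# Channel counts, hit rates, and VRAM bandwidths below are illustrative assumptions.

def effective_bw(core_ghz, cache_channels, hit_rate, vram_bw_gbs):
    cache_bw = cache_channels * 64 * core_ghz          # 64 B/clock per channel -> GB/s
    return hit_rate * cache_bw + (1.0 - hit_rate) * vram_bw_gbs

for ghz in (1.6, 1.9, 2.5):
    big = effective_bw(ghz, 16, 0.58, 512)   # 6800 XT-like: clocks move most traffic
    small = effective_bw(ghz, 2, 0.37, 144)  # 6500 XT-like: most misses hit slow VRAM
    print(f"{ghz} GHz -> big: {big:.0f} GB/s, small: {small:.0f} GB/s")
```

In this toy model, going from 1.6 to 2.5 GHz lifts the big-cache card's effective figure by roughly 45%, while the small-cache card only gains about 25%, which is the same general shape as the scaling in the chart above.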
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,848
106
Core clock speeds in RDNA2 also directly affect Infinity Cache bandwidth. The 6800 XT has 128MB of IfC, resulting in a VERY high hit rate. The higher you crank up the GPU clock speed, the higher the average memory bandwidth goes, at nearly the same rate. The 6500 XT has 16MB of IfC, roughly 12% of the 6800's capacity, and has a significantly lower hit rate. In addition, because of how AMD chose to cut the chip down, the cache is technically slower than the VRAM on the card with respect to total bandwidth; it just has a much lower response time. Increasing the GPU clocks by the same amount as the 6800 XT doesn't even get it to the same bandwidth as the VRAM that's already installed. The performance scaling makes sense in this context. It's a very cut-down part and is up against serious limitations.
Info for others:
N21 has 128MB of IF clocked at 1.94 GHz; with 16 × 64B channels that adds up to 1987 GB/s, and a ~58% (4K) hit rate makes it effectively 1152 GB/s.
N24 has 16MB of IF; if it's clocked at 1.94 GHz, then with 2 × 64B channels that adds up to 248 GB/s, but a hit rate of ~37% (Full HD) makes it effectively only 92 GB/s.
64-bit 18 Gbps GDDR6 offers 144 GB/s of bandwidth.
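
Working those figures through in one place (this just reproduces the arithmetic above, nothing new added):

```python
# Reproduces the Infinity Cache figures quoted above:
# raw bandwidth = channels * 64 B/clock * clock (GHz), then scaled by hit rate.

def infinity_cache_bw(channels, clock_ghz, hit_rate):
    raw = channels * 64 * clock_ghz
    return raw, raw * hit_rate

print(infinity_cache_bw(16, 1.94, 0.58))  # N21: ~1987 GB/s raw, ~1152 GB/s effective
print(infinity_cache_bw(2, 1.94, 0.37))   # N24:  ~248 GB/s raw,   ~92 GB/s effective
print(64 / 8 * 18)                        # 64-bit 18 Gbps GDDR6: 144.0 GB/s
```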

I don't know at what speed the IF in N24 works, or if the core clocks directly affect the IF clocks.
Shouldn't it be a separate clock domain?
 
Last edited:

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
I haven't seen it referenced anywhere that the IfC has a separate clock domain, though. AMD quotes a static IfC bandwidth of ~87 GB/s (it's in their combined card bandwidth numbers). It looks like they chop the IfC bandwidth in half with each size reduction.

Making it a separate clock domain doesn't make sense for a low-cost part, so it stands to reason that it scales with the core clock. Unfortunately, it has a poor hit rate, so it really doesn't help much. We'll see when the APUs hit the market without it. It'll also be interesting to see memory overclocking benchmarks once they come out.

I suspect that NAVI24 is too cut down to see much benefit.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
I don't know for sure. It makes a lot more sense to have the clock speed tied to the core speed just for the sake of simplicity and synchronization. GPU clock speeds are often a lot slower than CPUs', so the cache shouldn't have a problem keeping up.
 
  • Like
Reactions: Tlh97 and Leeea

eek2121

Platinum Member
Aug 2, 2005
2,930
4,025
136
That's not laudable though, it's because it's the most overpriced by the manufacturer to start with. MSRP is about double what it should be. This is at best a $100 card, overpriced by AMD to $200 MSRP, and $280 to $300+ by some OEMs. So in essence it already has 2X to 3X scalping built in by AMD and OEMs.

Add to that, this is a deeply flawed card that isn't very desirable to start with, so there's very little room left for additional profiteering/scalping. This isn't something to cheer about.

RTX 3050 OTOH looks like a significantly better/less crippled card in every way. More base performance, more VRAM, full PCIe bus, Full Media Encoders and Decoders, full display output panel. IOW better in every way and not crippled at all. So of course demand is going to be much higher and thus there will be room for additional profiteering/scalping.

Good luck being able to get one at MSRP.

This is not inflation though, this is scalping and profiteering.

GPU production faces the same actual cost of inputs as other electronics, yet other electronics are typically selling for only 0-10% higher prices. 100-300% higher GPU prices are just taking advantage of "desperate" customers.

Though I agree, that since everyone is in on the profiteering now, prices will be very slow to fall, even if mining disappeared as an excuse.

Give me a break. AMD is not "scalping" or "profiteering". Their margins on this card are pretty slim. Keep in mind AMD does NOT see anywhere close to $200 for this card. They see about half that at best. The chip alone, excluding everything else, costs them around $20 per card. VRM, GDDR6, other components, packaging (for first party), shipping, logistics, etc. All of that has to squeeze into that $100. These chips are rejects, and they are actually making most of the margin in mobile. The desktop cards just help them increase volume, which allows them to sell everything at a slightly lower price, and it helps with the GPU market issues. That is also why these cards are PCIe 4.0 x4. That is all they needed on mobile, and moving to a higher configuration would require design changes that would make the desktop release pointless.
 
  • Like
Reactions: Tlh97

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
My own personal gripe with the 6400 and 6500XT isn't even the performance. It's the crippled media block. Would have been decent if that had been fully featured. Why AMD? Just why?
If you want a media GPU, I would recommend waiting for the Intel GPUs; the DG1 is an awesome media GPU. I mean, that thing uses about the same power as the J4105 iGPU for media decode. If I could add it to my J4105 PC I would buy one, but I can't. But maybe a 96EU DG2? I'm still kinda hoping to one day see a BIOS for the DG1s; the ROM chip is there.

Any old potato can software decode AV1 at 1080p, which is likely the best monitor resolution this card will ever be paired with.

Well, my J4105 media PC can software decode YouTube AV1 up to 1080p resolution, at 100% CPU usage but without drops; a 2C/4T Zen-based Athlon is better than that. But under that logic, just remove the media block completely.
 

ultimatebob

Lifer
Jul 1, 2001
25,135
2,445
126
It looks like the eBay scalper prices for these 6500 XTs are currently hovering around $350, which is insane for a card of this class. For that price, I'd rather have a used GeForce GTX 1070, which is also hovering around the same price range.

That said, I see one on Newegg in stock at the moment for $269, with a limit of 20 per customer. Gee, way to help with the shortages there, Newegg :)
 
Last edited: