Question Speculation: RDNA3 + CDNA2 Architectures Thread


uzzi38

Platinum Member
Oct 16, 2019

PJVol

Senior member
May 25, 2020
BRB, gonna pop by my friend's place and "heavily OC" his 6800XT into a 4080.
Sorry, but I don't get what exactly in my post is to be sneered at.
Anyway, I meant the performance of the 7900XT in a specific benchmark compared to the 6800XT, and nothing more, which is enough, though, to suggest performance-scaling issues. Just in case, the comparison below is against my 6800XT with a mild OC (315W):

Test | 7900XT | 6800XT OC | Uplift
TSE | 13,687 | 10,634 | 28.7%
TSS | 23,897 | 22,266 | 7.3%
FSU | 16,915 | 14,586 | 15.9%
FSE | 33,497 | 29,541 | 13.4%
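A quick sanity check of the quoted uplifts, using only the scores posted above (a minimal Python sketch):

```python
# 3DMark graphics scores quoted above: (7900XT, 6800XT with mild OC)
scores = {
    "TSE": (13687, 10634),
    "TSS": (23897, 22266),
    "FSU": (16915, 14586),
    "FSE": (33497, 29541),
}

# Relative uplift of the 7900XT over the OC'd 6800XT, in percent
uplift = {k: 100 * (a - b) / b for k, (a, b) in scores.items()}
for test, pct in uplift.items():
    print(f"{test}: +{pct:.1f}%")
```

The results land within about 0.1 percentage points of the figures in the post.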

 

Heartbreaker

Diamond Member
Apr 3, 2006
I guess they're mounting brackets, but basically the extra bit to keep your GPU from sagging under the weight.

https://www.mnpctech.com/products/rtx-4060-4070-4080-4090-gpu-anti-sag-support-arm-bracket

I thought you meant something FE specific.

Yeah, I know such things exist, but starting with the 3000 FE cards, that outer frame tied everything together, making them sag-proof/resistant. I expect it will be the same for the 4000 FE cards:
 

TESKATLIPOKA

Platinum Member
May 1, 2020
From AGF (@XpeaGPU) on twitter.

N31 missed its clock target by 700~900MHz. But the problem is not only engineering. Current AMD bean counters at top management, i.e. those focusing on extreme PPA for max profit, set impossible goals, thus too many cut corners, and extreme design complexity for the timeframe...

Misunderstanding. AMD management wanted the perf crown AND to beat NV on price (small dies + chiplets). Too aggressive a target, which engineers could not achieve without tons of complex tricks, and it finally failed. The proof is in the RDNA3 ISA: so many compromises everywhere to cut the number of xtors.

The problem in your analysis is that N31 was designed to compete with AD102, not AD103...

But the initial AMD plan was to slightly undercut the 4090's price with a "competitive" N31 (at least in raster). From what I heard last year, top N31 was aimed to launch around $1.5k. N31 should have been a milking cow for the Green team...

That's 3,200-3,400MHz, or 28-36% higher clocks. If this is true, then AMD really *ucked up big time.
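The arithmetic behind that range, sketched out (assuming the ~2500MHz current boost clock figure used elsewhere in the thread):

```python
base = 2500             # current N31 boost clock in MHz (figure used in this thread)
missed_by = (700, 900)  # rumoured clock-target shortfall in MHz

targets = [base + d for d in missed_by]        # implied design targets in MHz
uplifts = [100 * d / base for d in missed_by]  # percent above current clocks

print(targets)  # [3200, 3400]
print(uplifts)  # [28.0, 36.0]
```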
 

Kaluan

Senior member
Jan 4, 2022
That's 28-36% higher clocks. If this is true, then AMD really *ucked up big time.
With all the educated speculation in the last few weeks, we'd already established that may have been the case.

It's funny, my RDNA3 theorycrafting specs (didn't post them) had the top N31 launch SKU at exactly ~800MHz higher boost clocks than the current ~2500MHz 🤣


Slightly off-topic: design woes aside, GDDR6W seems perfect for AMD's eventual Radeon PRO W7800/7900 series.
 

Kaluan

Senior member
Jan 4, 2022
I think the bug-free version of RDNA3+ will be called:
RX 7950XTX FE
RX 7950XT FE
FE meaning Fixed Edition. :D
On a similar but also serious note, I'm willing to bet that if and when they eventually launch a fixed lineup, the top SKU will be called 7970 something, cashing in on the HD 7970's fame.


Maybe it's just me, but ever since we got confirmation that RDNA3+ is a thing, some of us should've taken the hint that they're unhappy enough with something about RDNA3 to launch a mid-cycle redesign.
But as some pointed out in other places I follow, AMD didn't want to be THAT brand (*glances at Intel*) that can't keep schedules, so they had to have something ready by the end of 2022 to show the investors.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
On a similar but also serious note, I'm willing to bet that if and when they eventually launch a fixed lineup, the top SKU will be called 7970 something, cashing in on the HD 7970's fame.


Maybe it's just me, but ever since we got confirmation that RDNA3+ is a thing, some of us should've taken the hint that they're unhappy enough with something about RDNA3 to launch a mid-cycle redesign.
But as some pointed out in other places I follow, AMD didn't want to be THAT brand (*glances at Intel*) that can't keep schedules, so they had to have something ready by the end of 2022 to show the investors.
I always associated RDNA3+ with the Strix Point IGP.
If they want to capitalize on the HD 7970, then it should look like this:
RX 7970
RX 7950
I think they can get rid of XT and XTX; I just don't like it.
 

eek2121

Platinum Member
Aug 2, 2005
I figured with the fmax set to 3.7GHz, we were likely looking at 3.2-3.4GHz.

If the rumors are true, it will be interesting to see if AMD actually does a refresh of N31. It would be cheaper for them not to bother.

Still no leaked benchmarks except the 3DMark stuff. I am a bit surprised. AMD must've really limited review samples.
 

linkgoron

Platinum Member
Mar 9, 2005
I figured with the fmax set to 3.7GHz, we were likely looking at 3.2-3.4GHz.

If the rumors are true, it will be interesting to see if AMD actually does a refresh of N31. It would be cheaper for them not to bother.

Still no leaked benchmarks except the 3DMark stuff. I am a bit surprised. AMD must've really limited review samples.
With the current 2 year cadence in GPUs, a real refresh a year later wouldn't be so bad.
 

biostud

Lifer
Feb 27, 2003
I figured with the fmax set to 3.7GHz, we were likely looking at 3.2-3.4GHz.

If the rumors are true, it will be interesting to see if AMD actually does a refresh of N31. It would be cheaper for them not to bother.

Still no leaked benchmarks except the 3DMark stuff. I am a bit surprised. AMD must've really limited review samples.
The question is how close N32 will be in performance. If, as the rumors have it, they launch N32 with performance close to the 7900XT but much cheaper to produce, then it really doesn't make sense to keep producing the more expensive card, unless they can respin it for higher performance and price.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
Here are some test results from TPU. First Alleged RX 7900-series Benchmarks Leaked
Original source is @Vitamin4Dz
1. God of War
2. Cyberpunk 2077
3. Assassin's Creed Valhalla
4. Watch Dogs: Legion
5. Red Dead Redemption 2
6. Doom Eternal
7. Horizon Zero Dawn


gdansk: For that price, yes, but originally it should have cost more and performed better.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
With the current 2 year cadence in GPUs, a real refresh a year later wouldn't be so bad.

- AMD was on 7nm for ages. Nvidia basically tried to find the cheapest wafer they could get away with during that time (TSMC 12nm / Samsung 8nm).

I suspect we'll be on 5nm and variations thereof for at least two architectures. If AMD can get another tier of performance out of the parts with a respin, they'll do it.
 

Panino Manino

Senior member
Jan 28, 2017
It's really a shame.
Why is AMD like this, at least with GPUs? Every odd generation is a tragedy. Again it seemed they had a chance to compete at the top, but again they're out of the game, and Nvidia keeps increasing their lead at the top spot.

And we're back to the sad "Sonic Cycle of GPUs".
 

DisEnchantment

Golden Member
Mar 3, 2017
Going through Mesa, it seems GFX11 is really bug-ridden, worse than RDNA1.

  • ALU stalls require a lot of additional instructions to be inserted to signal a switch to another wave.

  • Export conflicts, likely coming from the OREO.
Currently all GFX11 parts have this.

  • GFX11 has excessive hazards, which need a lot of wait states and idling/NOPs to let the data dependencies resolve.

  • Yikes, so many dependency issues which the HW cannot manage.
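For illustration only, the kind of fixup being described, where the compiler pads dependent instructions with NOPs until a hazard window has elapsed, can be sketched in Python. The instruction names and the two-slot latency here are made up for the example; they are not the real GFX11 hazard rules:

```python
def insert_wait_states(insts, latency=2):
    """Pad a linear instruction stream with s_nop so that no instruction
    reads a register sooner than `latency` issue slots after it was written.
    Each instruction is a tuple (name, dest_reg_or_None, [src_regs])."""
    last_write = {}   # register -> issue slot of its last writer
    out, slot = [], 0
    for name, dest, srcs in insts:
        # Earliest slot at which every source operand is safe to read
        ready = max((last_write[s] + latency for s in srcs if s in last_write),
                    default=slot)
        while slot < ready:                    # hazard window not yet elapsed:
            out.append(("s_nop", None, []))    # burn an issue slot
            slot += 1
        out.append((name, dest, srcs))
        if dest is not None:
            last_write[dest] = slot
        slot += 1
    return out

# A dependent add issued right after its producer picks up one s_nop of padding
prog = [("v_mov_b32", "v0", []), ("v_add_f32", "v1", ["v0"])]
print([name for name, _, _ in insert_wait_states(prog)])
# ['v_mov_b32', 's_nop', 'v_add_f32']
```

The real hazard workarounds in the compiler are per-instruction-class and far more involved, but the cost model is the same: every inserted wait slot is an issue slot doing no work.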
A couple of additional bugs from LLVM

user-sgpr-init16-bug
mad-intra-fwd-bug
Code:
def FeatureUserSGPRInit16Bug : SubtargetFeature<"user-sgpr-init16-bug",
  "UserSGPRInit16Bug",
  "true",
  "Bug requiring at least 16 user+system SGPRs to be enabled"
>;

Code:
def FeatureMADIntraFwdBug : SubtargetFeature<"mad-intra-fwd-bug",
  "HasMADIntraFwdBug",
  "true",
  "MAD_U64/I64 intra instruction forwarding bug"
>;


Code:
def FeatureISAVersion11_Common : FeatureSet<
  [FeatureGFX11,
   FeatureLDSBankCount32,
   FeatureDLInsts,
   FeatureDot5Insts,
   FeatureDot7Insts,
   FeatureDot8Insts,
   FeatureNSAEncoding,
   FeatureNSAMaxSize5,
   FeatureWavefrontSize32,
   FeatureShaderCyclesRegister,
   FeatureArchitectedFlatScratch,
   FeatureAtomicFaddRtnInsts,
   FeatureAtomicFaddNoRtnInsts,
   FeatureFlatAtomicFaddF32Inst,
   FeatureImageInsts,
   FeaturePackedTID,
   FeatureVcmpxPermlaneHazard,
   FeatureBackOffBarrier,
   FeatureMADIntraFwdBug]>;

def FeatureISAVersion11_0_0 : FeatureSet<
  !listconcat(FeatureISAVersion11_Common.Features,
    [FeatureGFX11FullVGPRs,
     FeatureUserSGPRInit16Bug])>;

Just to recollect, RDNA1 carried these hazards and lots of bugs:
Code:
def FeatureGroup {
  // Bugs present on gfx10.1.
  list<SubtargetFeature> GFX10_1_Bugs = [
    FeatureVcmpxPermlaneHazard,
    FeatureVMEMtoScalarWriteHazard,
    FeatureSMEMtoVectorWriteHazard,
    FeatureInstFwdPrefetchBug,
    FeatureVcmpxExecWARHazard,
    FeatureLdsBranchVmemWARHazard,
    FeatureNSAtoVMEMBug,
    FeatureNSAClauseBug,
    FeatureOffset3fBug,
    FeatureFlatSegmentOffsetBug,
    FeatureNegativeUnalignedScratchOffsetBug
   ];
}

None of these are in RDNA2

But actually, yeah, not as bad as RDNA1.
 

tajoh111

Senior member
Mar 28, 2005
Nvidia is simply a strong competitor in comparison to Intel.

In addition, AMD being on 7nm when Nvidia was on Samsung's 8nm garbage created an illusion of AMD catching up, but since Maxwell, AMD has generally been behind.

Add in the high development cost of FinFET chips and AMD's generally CPU-first R&D spending, and we get the typical situation of AMD being behind in the GPU space.

What kind of magnifies this disappointment, however, is the hype train that AMD's guerrilla marketing team creates for every launch. Fake leakers that mysteriously disappear after launches (the most recent is Greymon). The use and then subsequent destruction of youtubers/influencers who use leaks for their platform, e.g. SemiAccurate, AdoredTV, MLID, RedGamingTech. After each of these hype trains crashes, these influencers typically fade into obscurity. The next person awaiting that fate is likely MLID. AMD used to do it more transparently with people on forums and AMD representatives, but after similar hype crashes, the reps' and subsequently AMD's own reputation took a hit.
 
Mar 11, 2004
It's really a shame.
Why is AMD like this, at least with GPUs? Every odd generation is a tragedy. Again it seemed they had a chance to compete at the top, but again they're out of the game, and Nvidia keeps increasing their lead at the top spot.

And we're back to the sad "Sonic Cycle of GPUs".

Same thing happens to Nvidia and people don't bat an eye, while people still demonize AMD for basically every issue, large or small. But compare the Vega/Volta situation: Vega had issues, but it was a lot more competitive than people seem to recall, while they forget that Volta literally never had a consumer card. There was one prosumer Titan card and that was it, and it was priced so insanely that I was baffled people were surprised by Nvidia's later pricing when they'd already gone so bonkers. On top of that, for all the constant griping about the AMD hype train, there were Nvidia fanboys claiming Volta was going to offer over 100% (legit, some were more like 150-250%) performance improvement, when it ended up being, I think, ~50%. Not a single peep about any of that. Or how Nvidia has been doing pretty badly in perf/W (which suddenly doesn't matter the moment Nvidia is bad at it, but if they aren't, then it's suddenly the thing that matters most).

And Ampere was hyped to hell, especially after the SM count stuff came out. While it certainly wasn't a failure, if AMD had a similar outcome it'd have been considered a complete failure and we'd never hear the end of it. Heck, we still see people complaining about the first 7970 era and how that shows AMD has always been inferior, and that was a decade ago. (Although it is odd seeing people pining for the fixed, mega-clocking version of the 7000 series as some awesome similarity.)

Nvidia is simply a strong competitor in comparison to Intel.

In addition, AMD being on 7nm when Nvidia was on Samsung's 8nm garbage created an illusion of AMD catching up, but since Maxwell, AMD has generally been behind.

Add in the high development cost of FinFET chips and AMD's generally CPU-first R&D spending, and we get the typical situation of AMD being behind in the GPU space.

What kind of magnifies this disappointment, however, is the hype train that AMD's guerrilla marketing team creates for every launch. Fake leakers that mysteriously disappear after launches (the most recent is Greymon). The use and then subsequent destruction of youtubers/influencers who use leaks for their platform, e.g. SemiAccurate, AdoredTV, MLID, RedGamingTech. After each of these hype trains crashes, these influencers typically fade into obscurity. The next person awaiting that fate is likely MLID. AMD used to do it more transparently with people on forums and AMD representatives, but after similar hype crashes, the reps' and subsequently AMD's own reputation took a hit.

Wait, you're blaming AMD for people being morons/clowns and propping up "leakers"? Seriously, the "leakers" have become second only to cryptocurrency evangelists in being FOS, which everyone knows, yet people still keep propping them up for some reason. But you people are like addicts: you admit it's stupid to put any stock into it, but you keep reporting and discussing every single rumor, often while griping about how much of it is obviously just nonsense seeking to profit off people doing exactly the behavior you're complaining about.

Further, why does the Nvidia hype train get a free pass? It's been just as bad (go back and look at the Nvidia fanboys on here with regard to Volta; we saw similar hype over the SM count of Ampere). Especially considering the past history of behavior, I'd take a look at Nvidia before I'd blame AMD for that stuff. Notice how we get rumors suggesting doom for Nvidia (with regard to power use, for instance), and then the product comes out and it's not that bad. That's a well-known "softening the blow" tactic. Meanwhile there's a mountain of absolute nonsense about AMD, and then perpetual "well, they didn't hit 4GHz, so it's all broken and they're awful, I'll never forgive them for this!!!" behavior.

How do you explain Lovelace if Ampere was simply due to the garbage Samsung process? This time Nvidia has the superior process, and yet even a supposedly broken AMD chip is still very likely to be better in perf/W. Where things end up remains to be seen, but that's very likely to hold true.

It's not that. It's because Nvidia literally spent billions on this mindshare. Actually, Intel has been as competitive in CPUs as Nvidia has in GPUs, so that argument is straight nonsense. It is straight up how Nvidia intentionally worked to create the prevalent mindset. There's a reason people are hating on AMD but pining for Intel in the GPU space, despite Intel's GPUs being junk (and their history of failing in GPUs and giving up quickly).

The one thing I'll give you is that the complexity of modern chips is really tough, and AMD is doing something extra complex as well. But apparently that doesn't matter: despite Nvidia having the superior process and the simpler chip design, AMD is a failure. Who cares that we don't know where things stand because the AMD stuff isn't out yet; why let that stop the nonsense?