[Question] Speculation: RDNA3 + CDNA2 Architectures Thread


uzzi38

Platinum Member
Oct 16, 2019

Leeea

Diamond Member
Apr 3, 2020
The problem with 3 GHz speculation is power consumption: it would go through the roof to get to 3 GHz.

The partner cards are going to be interesting.
 

Joe NYC

Golden Member
Jun 26, 2021
How much higher? Base clocks between N21 and N22 were 500 MHz apart, and I doubt that was a fluke. For N32 to be 500 MHz faster and hit a 3 GHz boost would not confirm anything, in my opinion.

If N32 hit 3 GHz at 355 W, maybe that would not positively confirm anything. But if it hit 3 GHz at 250 W or below, it would indicate that some kind of bottleneck was fixed.
 

dr1337

Senior member
May 25, 2020
If the slide with the "designed for 3 GHz" line is not a hoax, then it is a reasonable conclusion that something did not end up as expected.
Take another look at the slide: in the same block where they say "Architected to exceed 3GHz", they also specifically say "61.6 Boost TFLOPs", which is exactly N31's TFLOPs at its boost clock. And lower down in the slide they specifically mention N31's 2.5 GHz clocks. Why would it be reasonable to conclude that something didn't end up as expected when the slide literally specifies the sub-3 GHz clocks and FP performance? It really sounds like a poorly placed line in a draft slide referring to the architecture as a whole, not just the AMD reference card.
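
A quick sanity check on that 61.6 figure (a sketch; RDNA3's dual-issue FP32 is public, but the exact 2505 MHz clock is back-calculated, an assumption):

```python
# Napkin check: does "61.6 Boost TFLOPs" match N31's ~2.5 GHz boost clock?
# RDNA3 dual-issue FP32: each SP can retire 2 FMAs = 4 FLOPs per clock.
cus = 96                  # N31 (7900 XTX)
sps = cus * 64            # 6144 stream processors
flops_per_clock = 4       # 2 issue slots x 2 ops per FMA
boost_ghz = 2.505         # 2505 MHz makes the slide's number exact

tflops = sps * flops_per_clock * boost_ghz / 1000
print(f"{tflops:.1f} TFLOPs")  # -> 61.6
```

So the slide's own FLOPs figure is internally consistent with a ~2.5 GHz boost clock, not 3 GHz.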
 

uzzi38

Platinum Member
Oct 16, 2019
How can anyone believe rumors about how AMD is fixing the problem, when all the leakers were blindsided by AMD even having the problem?

Quoting Skyjuice:

"Unfortunately, our A0 performance profiling results were not indicative of production samples, so we have refrained from detailing our findings publicly."


The clock speeds the final silicon was able to hit were only known outside of AMD very late. We're talking last-month kind of late.

Nobody wanted to leak it is all.
 

dr1337

Senior member
May 25, 2020
Quoting Skyjuice:

"Unfortunately, our A0 performance profiling results were not indicative of production samples, so we have refrained from detailing our findings publicly."


The clock speeds the final silicon was able to hit were only known outside of AMD very late. We're talking last-month kind of late.

Nobody wanted to leak it is all.
Weren't they one of the originals claiming high clocks? Videocardz had posted some of the slide pictures before the presentation even started, so Angstronomics getting eyes on the press package the day before and then leaving a little random sourceless quip at the end of the blog saying he was actually wrong is evidence of literally nothing at all.

How can you say nobody wanted to leak it when they literally didn't get this information until Nov 3? I really doubt a random little blog trying to make a name for itself would sit on leaks like that.
 

Hans Gruber

Platinum Member
Dec 23, 2006
Is it possible AMD has some kind of throttling firmware built in to keep the clock speeds down? That is, capping the speed so that newer versions of the same chip can clock to 3 GHz+ without the firmware limiting clock speeds.
 

PJVol

Senior member
May 25, 2020
Is it possible AMD has some kind of throttling firmware built in to keep the clock speeds down?

AMD CPU and GPU power management IS a multitude of throttlers, and has been since the 28nm chips.
For example, RDNA3 has multiple CPO accumulators spread throughout the chip, limiting power or clocks for a certain IP block (i.e. a critical path) when its replica-path oscillator's "count" exceeds the fused minimum or average "count".

But this only matters if there's no hard Fmax limit, as in the 6800 and 6800 XT BIOS for the reference boards (2600 and 2800 MHz respectively). So if, let's say, the hard Fmax is 3.7 GHz, then the clocks will rise by themselves on less leaky silicon or silicon with reduced total Cac.
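
A toy model of that kind of mechanism (purely illustrative; the threshold, step size, and comparison direction are made-up stand-ins, not AMD's actual firmware):

```python
# Illustrative sketch of per-IP replica-path (CPO) throttling. All names
# and numbers are hypothetical; AMD's real implementation is not public.
FUSED_COUNT = 1000     # fused reference "count" for this critical path
HARD_FMAX_MHZ = 3700   # hypothetical hard Fmax from the BIOS

def next_clock(cur_mhz: int, cpo_count: int) -> int:
    """Step the clock for one IP block from its oscillator reading."""
    if cpo_count > FUSED_COUNT:   # replica path out of spec: throttle
        return cur_mhz - 25
    return min(cur_mhz + 25, HARD_FMAX_MHZ)  # margin left: keep boosting

print(next_clock(2500, 1040))  # -> 2475 (limited by the CPO)
print(next_clock(2500, 960))   # -> 2525 (only the Fmax cap applies)
```

The point of the last paragraph is the second branch: with a generous hard Fmax, better silicon simply clocks itself higher.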
 

jpiniero

Lifer
Oct 1, 2010
Is it possible AMD has some kind of throttling firmware built in to keep the clock speeds down? Capping the speed so that newer versions of the same chip can clock to 3ghz+ without firmware limiting clock speeds.

I had wondered about them doing something like a binned XTX Ultra. I think they would have announced that by now though, if it was coming.
 

uzzi38

Platinum Member
Oct 16, 2019
Weren't they one of the originals claiming high clocks? Videocardz had posted some of the slide pictures before the presentation even started, so Angstronomics getting eyes on the press package the day before and then leaving a little random sourceless quip at the end of the blog saying he was actually wrong is evidence of literally nothing at all.

How can you say nobody wanted to leak it when they literally didn't get this information until Nov 3? I really doubt a random little blog trying to make a name for itself would sit on leaks like that.

Check through their site. They haven't updated any of the articles after the fact. Try and find one where they said anything about extremely high clocks.

Although I can't provide proof because the Discord messages have been deleted, of the people I know of, Skyjuice was the second person who knew of Navi31 sporting 96 CUs, preceding public rumours by several months. The first knew over a year and a half ago.

And FYI, I've known Skyjuice for over two years. He only started posting anything publicly as of late, so the idea that there's no way he'd sit on leaks he could be using to make a name for himself is fundamentally flawed. If that's all he cared about, he could have been doing this months, if not years, ago.
 

tajoh111

Senior member
Mar 28, 2005
The problem with 3 GHz speculation is power consumption: it would go through the roof to get to 3 GHz.

The partner cards are going to be interesting.

Yes. If decoupling the shader clock from the front end saves 25% power, which in turn is 90 watts (2.5 GHz down to 2.3 GHz), this implies that a 2.5 GHz game clock consumes about 450 watts. How much would pushing the shader clock roughly 700 MHz above stock affect power consumption? Would 600 watts be acceptable from AIBs, and for AMD fans?

Would partners even want to cool 500+ watts on what is a ~300 mm² die? Would that even be possible on air?

A 600 mm² monolithic die is far easier to cool than a die half the size putting out the same heat or more.
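
Spelling out that napkin math (a sketch; the 90 W figure is from the post above, and the shader-domain share and cubic clock-to-power scaling are assumptions, not measurements):

```python
# Napkin math behind the 450 W / 600 W figures. The 25%/90 W saving and
# P ~ f^3 scaling (f * V^2 with V tracking f) are assumptions.
tbp_w = 355                    # 7900 XTX board power
saved_w = 90                   # claimed saving from decoupled clocks
print(tbp_w + saved_w)         # -> 445: implied power with shaders at 2.5 GHz

shader_w = 250                 # hypothetical shader-domain share of TBP
scale = (3.0 / 2.3) ** 3       # ~2.2x power for 2.3 -> 3.0 GHz
print(tbp_w - shader_w + shader_w * scale)  # -> ~660 W territory
```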
 

PJVol

Senior member
May 25, 2020
this implies that a 2.5 GHz game clock consumes about 450 watts

Does it really? It was said to be "up to" 25%, and the actual savings obviously depend on workload activity. For average gaming, ~10-15% imho looks more realistic, which gives ~8-10% more performance.
25% could be the best-case scenario, like a stress test.
 

Topweasel

Diamond Member
Oct 19, 2000
This forum won't even accept that the Intel ARC cards have a hardware bug. You really think they will accept that a company that has been making GPUs for 30 years would release a hardware bug?

It's not something we have run into often. Back at the OG Phenom launch there was certainly a bug that kept the clock speeds capped. They had to respin, and relaunched it as the Phenom II shortly after. But that was because AMD was barely competitive without the projected clocks.

As for Intel, you have the rumors, but the proof is in the pudding: there's no way Intel dedicated that much in resources and power to a GPU that performs at the level ARC does. But it's not so much a bug as a design oversight (as reported), and it's an architecture problem. Which is why reports say Battlemage won't fix the issue: its design is basically capped, and its replacement would be finished before they could finish a Battlemage redesign.

This seems like something a respin might correct, and for a design that will be in the market for 1.5-2 years, if there is something they could do to get higher clocks, I don't see why AMD wouldn't respin it for a refresh. Considering the power design and the main GPU die size, I don't think AMD ever designed it to be a 4090 opponent. Which makes it seem like they chose the clocks for yields, and that above them there would be wild variance.

Also, AIBs, especially the ones that aren't strictly AMD-only, never put as much effort into their AMD designs as their NVidia ones (and Intel, when talking boards). It's very possible they could be doing something not great with cooling, power phases, and power management. If AMD had a bunch of near-launch testing units and cooler designs for more than two 8-pin connectors, I'd think there might be some foundation to there being an issue. But the coolers and the reference board design look like 350 W was always the target mark. AMD didn't drop the 12V connector; the board was never designed for it. They didn't drop a third 8-pin either. It very much looks like this was always its design mark. This looks to be their HD 4870 moment: not the top, but close enough, cheap enough, cheaper to manufacture, and also not some oversized monstrosity.
 

Kaluan

Senior member
Jan 4, 2022
Adding another 8-pin connector to a reference design (edit: or taking one out) is neither rocket science nor a big redesign endeavor. It doesn't say anything.

And the whole point of the "missed clock targets due to this or that alleged hardware bug" rumor was that N31 silicon could have achieved a much better V/F curve, not that it was supposed to scale to 600 W to hit those clocks. lol

The thread is pure pop specu-tainment at this point. Worse than before the official announcement 😅
 

Leeea

Diamond Member
Apr 3, 2020
It's not something we have run into often. Back at the OG Phenom launch there was certainly a bug that kept the clock speeds capped. They had to respin, and relaunched it as the Phenom II shortly after. But that was because AMD was barely competitive without the projected clocks.

Yeah, I'm thinking about waiting to see if there is a re-spin before making any purchasing decisions.

It is a great card; its performance/price is unequaled.

I just thought it would be something more.

Also curious if any AIB will add the third 8-pin so we can see what it can do pulling NVidia-level wattage. I suspect that is not going to be great though; AMD likely already has this thing at the point of diminishing returns. A 3 GHz re-spin could change that.

Probably just hopium though.
 

amenx

Diamond Member
Dec 17, 2004
The only 'weakness' of the cards is the RT performance. Raster is not far off the 4090 (and looks good against the 4080, which is what it's competing with). A 'respin' is not going to do much for RT, so I don't really see the point in it.
 

jpiniero

Lifer
Oct 1, 2010
The only 'weakness' of the cards is the RT performance. Raster is not far off the 4090 (and looks good against the 4080, which is what it's competing with). A 'respin' is not going to do much for RT, so I don't really see the point in it.

If you figure it's going to be two years to RDNA 4, they are probably going to want to do a refresh in the meantime, just to spruce things up. A dinky 5% uplift might not cut it if NVidia gets a decent performance gain from the extra bandwidth of GDDR7.
 

Kaluan

Senior member
Jan 4, 2022
Also curious if any AIB will add the third 8-pin so we can see what it can do pulling NVidia-level wattage.

"Mid-range" ASUS 7900 XT designs such as the TUF OC have 3x 8-pin, never mind the "premium" 7900 XTXs. So it's pretty clear 3x 8-pin will be a 'feature' on most custom 7900 XT/XTX cards.


The only 'weakness' of the cards is the RT performance. Raster is not far off the 4090 (and looks good against the 4080, which is what it's competing with). A 'respin' is not going to do much for RT, so I don't really see the point in it.

Beating or going toe to toe with a 4090 Ti in raster has a much better mindshare ring to it than "5-10% slower than the 4090", doesn't it?

And up to 25% more RT output than the current expectations for the 7900 XTX would be nothing to scoff at, even if it still wouldn't be as fast as the 4070-4080 in most hybrid/RT-enabled titles.

IDK, just saying. I'm more curious about N32 and what's happening there.

AMD's performance claims aside, the new "2nd gen RA" remains a mystery to me. I hope someone like Locuza does a video or article covering it.
 

Kaluan

Senior member
Jan 4, 2022
Did some ignoramus napkin math on the 7900 XT vs XTX TBPs, and it makes the XTX look good efficiency-wise (not necessarily performance-wise).

1.142x more SPs to "light up", 1.05x higher boost (and even higher base/game) clocks, and one more MCD plus the 4 GB of GDDR6 that goes with it, at 'just' 1.183x the power.

Even considering better binning on the XTX chip, and disregarding the rumors that the current silicon isn't up to (the original) par and that 'fixed' it could have a much better V/F curve, it's hard to imagine the 7900 XTX on current silicon as-is is the peak of what the RDNA3 design can do. Power scaling looks good.

Unless there really is a hardware limitation on clocks (which wouldn't bode well for the N32 and maybe N33 designs either).

Ugh, still one month away.
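
The same napkin math in code (using the public launch specs; the naive "SPs x clock" performance proxy is an assumption, not a benchmark):

```python
# XT vs XTX napkin math from the launch specs.
xtx = {"sps": 6144, "boost_mhz": 2500, "tbp_w": 355}  # 96 CU
xt  = {"sps": 5376, "boost_mhz": 2400, "tbp_w": 300}  # 84 CU

sp_ratio    = xtx["sps"] / xt["sps"]               # ~1.143x
clk_ratio   = xtx["boost_mhz"] / xt["boost_mhz"]   # ~1.04x
power_ratio = xtx["tbp_w"] / xt["tbp_w"]           # ~1.183x

naive_perf = sp_ratio * clk_ratio                  # ~1.19x
print(f"perf/W ratio: {naive_perf / power_ratio:.2f}")  # ~1.01
```

On that (admittedly naive) proxy, the XTX pays essentially no efficiency penalty for its extra units and clocks, which is the point above.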
 

biostud

Lifer
Feb 27, 2003
Did some ignoramus napkin math on the 7900 XT vs XTX TBPs, and it makes the XTX look good efficiency-wise (not necessarily performance-wise).

1.142x more SPs to "light up", 1.05x higher boost (and even higher base/game) clocks, and one more MCD plus the 4 GB of GDDR6 that goes with it, at 'just' 1.183x the power.

Even considering better binning on the XTX chip, and disregarding the rumors that the current silicon isn't up to (the original) par and that 'fixed' it could have a much better V/F curve, it's hard to imagine the 7900 XTX on current silicon as-is is the peak of what the RDNA3 design can do. Power scaling looks good.

Unless there really is a hardware limitation on clocks (which wouldn't bode well for the N32 and maybe N33 designs either).

Ugh, still one month away.

As far as I remember, it was pretty much the same with the 6800 XT vs the regular 6800.
 

Kaluan

Senior member
Jan 4, 2022
As far as I remember, it was pretty much the same with the 6800 XT vs the regular 6800.

Oh, I think so. Even RX 6800 to RX 6900/6950 XT seemed to scale pretty well in the official TBP numbers: 33% more SPs and 6-10% higher boost clocks, for around 1.32x the performance at 34% more power.
But those weren't chiplet-based, and all of the N21 designs powered the same amount of GDDR VRAM over the same bus. N31 and N32 depart from this.

IDK, just some food for thought.
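
Re-running the same check with the RDNA2 launch specs (same naive "SPs x clock" proxy as before, so an assumption; it overestimates real scaling, hence 1.46x here vs the ~1.32x observed):

```python
# RX 6800 -> RX 6950 XT scaling from the public specs.
r6800 = {"sps": 3840, "boost_mhz": 2105, "tbp_w": 250}  # 60 CU
r6950 = {"sps": 5120, "boost_mhz": 2310, "tbp_w": 335}  # 80 CU

sp  = r6950["sps"] / r6800["sps"]               # 1.33x
clk = r6950["boost_mhz"] / r6800["boost_mhz"]   # ~1.10x
pwr = r6950["tbp_w"] / r6800["tbp_w"]           # 1.34x
print(f"naive perf {sp * clk:.2f}x for {pwr:.2f}x power")
```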
 

amenx

Diamond Member
Dec 17, 2004
PowerColor teases a pic of the RX 7900 XTX HellHound.

[Image: PowerColor Radeon RX 7900 HellHound]

2x 8-pin power connectors, for 375 W max. Only 20 W over reference.

 

DisEnchantment

Golden Member
Mar 3, 2017
Propaganda mastermind, master jebaiter Scott Herkelman mass-gaslighting, or RDNA3 clocks not achieved?

[slide image]

Pay attention to this ("post-silicon measurement", "at launch"):

[slide image]

[slide image]

This slide shows a +18% game clock vs N21 (i.e. ~2.48 GHz on N31 vs 2.1 GHz on N21, yet it's officially listed as 2.3 GHz on N31).

Sauce
 

TESKATLIPOKA

Platinum Member
May 1, 2020
Propaganda mastermind, master jebaiter Scott Herkelman mass-gaslighting, or RDNA3 clocks not achieved?

[slide image]

Pay attention to this ("post-silicon measurement", "at launch"):

[slide image]

[slide image]

This slide shows a +18% game clock vs N21 (i.e. ~2.48 GHz on N31 vs 2.1 GHz on N21, yet it's officially listed as 2.3 GHz on N31).

Where did you get these slides? Are they legit?

The frontend in N31 runs at 2.5 GHz, but the shaders are only at 2.3 GHz.
Officially, the game clock is 2.3 GHz, and considering that the last slide mentions +18% for the game clock, I think they were talking about the game clock, not the boost clock, in the slide below about shader and frontend frequency.

[slide image]

And if they were talking about +15% for boost, then against the RX 6950 XT at least the frontend should be clocked at ~2655 MHz.
The boost clock for the shaders is likely ~2.5 GHz, and then 61.6 TFLOPs is correct.

If I applied +30% to the existing RDNA2 models, it would look like this:
RX 6950 XT: 2310 MHz -> 3003 MHz
RX 6750 XT: 2600 MHz -> 3380 MHz
RX 6650 XT: 2635 MHz -> 3426 MHz
RX 6500 XT: 2815 MHz -> 3660 MHz
N31 is nowhere near 3 GHz, even if I look only at the frontend frequency.
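
The arithmetic behind that list, for reference (the +30% factor is the one from AMD's slide; the boost clocks are the official RDNA2 specs):

```python
# +30% applied to RDNA2 boost clocks, as in the list above.
rdna2_boost_mhz = {
    "RX 6950 XT": 2310,
    "RX 6750 XT": 2600,
    "RX 6650 XT": 2635,
    "RX 6500 XT": 2815,
}
for card, mhz in rdna2_boost_mhz.items():
    print(f"{card}: {mhz} MHz -> {round(mhz * 1.30)} MHz")
# N31's 2.5 GHz frontend / 2.3 GHz shader clocks are well short of these.
```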
 

Bigos

Member
Jun 2, 2019
The +30% frequency slide seems to indicate that is what happens when you give up all of the efficiency gain. So if you want the +54% performance/watt increase, you have to back off on the frequency.
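
A worked example of that tradeoff (toy numbers; assumes perf scales with f and power with roughly f^3, i.e. f·V² with voltage tracking frequency, and that the +54% perf/W figure holds at moderate clocks):

```python
# Toy model: spend the gain on efficiency OR on frequency, not both.
perf_per_watt_gain = 1.54   # AMD's +54% claim, at moderate clocks

f_ratio = 1.30              # chasing the "+30% frequency" point instead
power_ratio = f_ratio ** 3  # ~2.2x power for +30% clock
print(perf_per_watt_gain * f_ratio / power_ratio)  # ~0.91x: gain is gone
```

Under those (crude) assumptions, taking the full +30% of frequency consumes the entire +54% perf/W improvement and then some.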