
Question Speculation: RDNA2 + CDNA Architectures thread

Page 90

mohit9206

Golden Member
Jul 2, 2013
My prediction for the two top cards with today's info:

RX6900XT with 16GB HBM2E RAM / ~RTX3090 raster performance at $1299
RX6800XT with 16GB HBM2E RAM / ~RTX3080 raster performance at $699
 

CakeMonster

Senior member
Nov 22, 2012
NV has left such a gap in memory and price between the 3080 and 3090 that if AMD can compete, they will dictate the price. If not, expect NV to price the 3080 20GB models in a way that makes you look at the 3090 again, or angrily buy the 3080 10GB that you could have gotten earlier if you had only known. I really hope AMD disrupts the pricing while still staying profitable.
 

Glo.

Diamond Member
Apr 25, 2015
A few things to add about the MacOS power tables.

IF those power tables are for Apple GPUs, those are the specs within which the Apple GPUs have to operate. The TDP is for WHOLE GPU BOARDS that are connected to Macs, or rather, the Radeon Pro versions.

Otherwise, it's just what DisEnchantment explained: another power table that came from AMD.

If we can find BIOSes for those GPUs in MacOS, then we are home. From those BIOSes we can get everything related to clock gating, voltages, power gates, etc.

For example, the Radeon Pro D700 had a 129W TDP for the whole board, at 1.025V, clocked at 850 MHz. It was possible to extract the exact details from MacOS back in the day. I don't know if it's still possible.
 

Glo.

Diamond Member
Apr 25, 2015
Yeah, always the same. 80CU at 2.2GHz seems somewhat realistic, with a corresponding power consumption around 300W. But now we are suddenly at 2.7GHz with 250W. People here are setting the launch up to be a failure just due to overhyped expectations, if the top SKU lands at 3080 level with merely lower power use.
Who, exactly, is talking about 2.7 GHz at 250W for an 80 CU GPU?

Or is it just your own projection?
 

Asterox

Senior member
May 15, 2012
Yes, very legit, all models have the same 1500MHz GPU clock, very legit. :grinning: I am not drunk, so I will not play the same guessing clickbait game.

We have the PS5 as an example, and that is a gaming console, not a wood stove.

RDNA2, 36CU: up to 2.2GHz

RDNA2 desktop GPU, the rumored 40CU part: now only 1500MHz
 

BlitzWulf

Member
Mar 3, 2016
Yes, very legit, all models have the same 1500MHz GPU clock, very legit. :grinning: I am not drunk, so I will not play the same guessing clickbait game.

We have the PS5 as an example, and that is a gaming console, not a wood stove.

RDNA2, 36CU: up to 2.2GHz

RDNA2 desktop GPU, the rumored 40CU part: now only 1500MHz


That's the base clock, not boost. Clocks might not be fully finalized yet and 1500 may be a placeholder, or maybe it's fake. We'll see!
 
Last edited:

TESKATLIPOKA

Senior member
May 1, 2020
These specs at Newegg are not the actual specs; where would they even get them?

I was thinking about how there is no mention of anything higher than 256-bit GDDR6, only HBM2.
My speculation is that the 60-64CU N21 will have only 256-bit GDDR6.
A 40CU N22 at 2.5GHz with 192-bit 16Gbps GDDR6 would have 12.8 TFLOPs and 384GB/s.
A 60-64CU N21 at 2.1GHz with 256-bit 16Gbps GDDR6 would have 16.13-17.2 TFLOPs and 512GB/s.
The increase in TFLOPs and bandwidth is comparable, so this is highly possible in my opinion.

Now what about the full 80CU chip? I think N21 could have both GDDR6 and HBM2 controllers on the die; combined, they probably wouldn't use up more die space than a 384-bit GDDR6 interface, so there is no problem. They would save some watts, which could go toward higher GPU clock speed, and although HBM2 is costlier than GDDR6, they can raise the price if it performs better than the RTX3080. Even if the margins on it were minimal, it's still great advertising for AMD that they are back.

With this in mind, the amount of VRAM could be:
N23 - 8GB, 128-bit GDDR6
N22 - 12GB, 192-bit GDDR6
Cut-down N21 - 16GB, 256-bit GDDR6
Full N21 - 16GB HBM2 -> 2 stacks of 8GB 2.4Gbps 2048-bit HBM2E would mean 614GB/s, or 20% more than 256-bit 16Gbps GDDR6. If that's still not enough, there is HBM2E Flashbolt with 33% higher bandwidth (3.2Gbps) than HBM2 Aquabolt, but that will be even costlier. At worst I am wrong and it will have 384-bit GDDR6.
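For anyone who wants to check the arithmetic above, here is a quick back-of-envelope sketch. It assumes the standard RDNA figures (64 shaders per CU, 2 FLOPs per shader per clock via FMA, 1024-bit interface per HBM stack); the CU counts and clocks are just the speculated numbers from this post, not confirmed specs.

```python
# Back-of-envelope math for the speculated N21/N22 configs above.
# Assumes RDNA CU = 64 shaders, 2 FLOPs/shader/clock (FMA).

def tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPs."""
    return cus * 64 * 2 * clock_ghz / 1000  # shaders * 2 ops * GHz -> TFLOPs

def gddr6_bandwidth(bus_bits: int, gbps_per_pin: float) -> float:
    """GDDR6 memory bandwidth in GB/s."""
    return bus_bits / 8 * gbps_per_pin

def hbm_bandwidth(stacks: int, gbps_per_pin: float) -> float:
    """HBM bandwidth in GB/s, with a 1024-bit interface per stack."""
    return stacks * 1024 / 8 * gbps_per_pin

print(tflops(40, 2.5))           # N22: 12.8 TFLOPs
print(gddr6_bandwidth(192, 16))  # 384 GB/s
print(tflops(64, 2.1))           # cut-down N21: ~17.2 TFLOPs
print(gddr6_bandwidth(256, 16))  # 512 GB/s
print(hbm_bandwidth(2, 2.4))     # 2x HBM2E @ 2.4 Gbps: 614.4 GB/s
```

614.4 / 512 is indeed a 20% bandwidth uplift over the 256-bit GDDR6 config, matching the claim above.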
 
Last edited:

Asterox

Senior member
May 15, 2012
That's the base clock, not boost. Clocks might not be fully finalized yet and 1500 may be a placeholder, or maybe it's fake. We'll see!
Placeholder for what? If you go by the AMD slide, it is very easy to put out solid clickbait blah. So if you go by the AMD slide, where is the increased clock speed if the stock GPU clock is only 1500MHz? :grinning: So the base clock is lower; OK, then probably IPC is lower too. In short, RDNA2 is garbage, buy Nvidia, no problem.

RX 5700XT, base GPU clock 1600MHz


RX 6700XT, base GPU clock 1500MHz

[Attached image: 2020-09-27_151333.jpg]
 

Mopetar

Diamond Member
Jan 31, 2011
If we look back through this thread and plot out the possible clock speeds people have been suggesting, we've probably gone from something like 1.8 GHz > 2 GHz > 2.2 GHz > 2.5 GHz > 3 GHz. If we extrapolate, I think the hype train will get us all the way to 4 GHz, given the current acceleration and the time remaining until launch.

We may even be reaching a point where this thing is going fast enough to lift off and turn into some kind of hype plane.
 

blckgrffn

Diamond Member
May 1, 2003
Were these confirmed for the Mac Pro only? The iMac and iMac Pro don't get mobile GPUs.
How many Apple SKUs are going to get third-party GPUs going forward, after the great Apple Silicon revolution? Probably just the Mac Pro and maybe the iMac Pro?

They've already put a 10-core Intel CPU in that; I feel like when they have an ARM CPU using way less power, they'll have a decent thermal envelope to work with.

No idea on the mobile front what is happening.🤷‍♂️
 

Glo.

Diamond Member
Apr 25, 2015
If we look back through this thread and plot out the possible clock speeds people have been suggesting, we've probably gone from something like 1.8 GHz > 2 GHz > 2.2 GHz > 2.5 GHz > 3 GHz. If we extrapolate, I think the hype train will get us all the way to 4 GHz, given the current acceleration and the time remaining until launch.

We may even be reaching a point where this thing is going fast enough to lift off and turn into some kind of hype plane.
5GHz confirmed! It's AMD, after all. They were not able to deliver the 5GHz dream on Zen 2, but for sure they will deliver GPUs with those clock speeds!
 

Glo.

Diamond Member
Apr 25, 2015
How many Apple SKUs are going to get third-party GPUs going forward, after the great Apple Silicon revolution? Probably just the Mac Pro and maybe the iMac Pro?

They've already put a 10-core Intel CPU in that; I feel like when they have an ARM CPU using way less power, they'll have a decent thermal envelope to work with.

No idea on the mobile front what is happening.🤷‍♂️
Usually in Pro lineups Apple has 2 or 3 SKUs based on one die.
 

Glo.

Diamond Member
Apr 25, 2015
A few things to add about the MacOS power tables.

IF those power tables are for Apple GPUs, those are the specs within which the Apple GPUs have to operate. The TDP is for WHOLE GPU BOARDS that are connected to Macs, or rather, the Radeon Pro versions.

Otherwise, it's just what DisEnchantment explained: another power table that came from AMD.

If we can find BIOSes for those GPUs in MacOS, then we are home. From those BIOSes we can get everything related to clock gating, voltages, power gates, etc.

For example, the Radeon Pro D700 had a 129W TDP for the whole board, at 1.025V, clocked at 850 MHz. It was possible to extract the exact details from MacOS back in the day. I don't know if it's still possible.
Ok, I now need to correct this.

The data about the Radeon Pro D700 is pulled from the GPU BIOS. It's the same situation as for the Radeon Pro 5700 and 5700 XT in Apple MacOS.

The data pulled from the OS is the typical spec for the specific die the system has to work with.

This is the reason why you can put, for example, a Radeon RX 5600 XT into a Mac Pro and it will work out of the box, but with the typical Radeon RX 5600 XT clock speeds (for example, 1650 MHz game clock and 1715 MHz boost clock).

You can't extract a GPU BIOS without having the GPU in your hands. The data for the Radeon Pro D700 and for the Radeon Pro 5700 and 5700 XT in the TechPowerUp database is straight from the GPUs. Also, the Radeon Pro 5700 and 5700 XT's 130W TDP is for the WHOLE BOARD: GPU + memory subsystem.

So those power targets that we have seen leaked may not be final, and the Apple versions will still be more efficient and clocked lower. However, IMO, the details about Navi 21A and 21B are so highly specific that I believe this might actually be very close to what could be pulled from the GPU BIOSes. At least, those might be close to the final Navi21-based Apple SKUs, with those exact specs.
 

nurturedhate

Golden Member
Aug 27, 2011
That would be a bad idea... Playing on a successful offering would be best. Maybe something like FrameCrusher.

2.5GHz is probably pushing it. Wait and see what happens in the end. It is what it is.
I apologize; I figured the /s was sufficiently implied by 5GHz clocks and naming something after Bulldozer.
 
