
Question Speculation: RDNA2 + CDNA Architectures thread


kurosaki

Senior member
Feb 7, 2019
257
247
76
Two different use cases.

I keep wanting to comment on RDNA2 but my guess is just as good as anyone else's. Someone on Crappit posted a header image on Computex back when it was canceled in May. End of September. I'm guessing we'll find out what we want to find out in the next 2-3 weeks?

I'm really happy I scored a platinum 850 watt PSU a few weeks ago. Sleep well, my precious.
Yes, I think it's a tactic: hold their hand close for as long as possible. Nvidia's pricing is starting to sink in, and performance for the 30xx series has been set. Wait until just a day before the 30xx series launches, then release all the info at once. Then sit back and watch Nvidia try to do damage control the night before the physical release.
 

A///

Senior member
Feb 24, 2017
829
578
106
Yes, I think it's a tactic: hold their hand close for as long as possible. Nvidia's pricing is starting to sink in, and performance for the 30xx series has been set. Wait until just a day before the 30xx series launches, then release all the info at once. Then sit back and watch Nvidia try to do damage control the night before the physical release.
Meaning people who've already bought one, with orders still going through? It's devious indeed.
 

Glo.

Diamond Member
Apr 25, 2015
4,652
3,281
136
Last year, AMD released info about their GPUs first, and then Nvidia released the RTX Super GPUs with counter-pricing, which resulted in AMD's famous "Jebaited" move.

What makes anyone here believe that waiting for the launch and the actual start of sales, then countering Nvidia's offer on performance and pricing, is a bad move? What if Nvidia has prepared a similar move to counter AMD?

That is why AMD is holding their hand close to their chest.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,250
186
106
I am expecting RDNA2 availability about six weeks after info starts coming out; there is no "source" for this, just my gut feeling. That gives AMD until the end of September to get some info out for my purchasing needs.
 

kurosaki

Senior member
Feb 7, 2019
257
247
76
Last year, AMD released info about their GPUs first, and then Nvidia released the RTX Super GPUs with counter-pricing, which resulted in AMD's famous "Jebaited" move.

What makes anyone here believe that waiting for the launch and the actual start of sales, then countering Nvidia's offer on performance and pricing, is a bad move? What if Nvidia has prepared a similar move to counter AMD?

That is why AMD is holding their hand close to their chest.
Yes; by waiting, Nvidia is kind of forced to keep pricing as-is, I guess. Imagine selling 10,000 cards and then having to lower the price by 100 USD to compete. That would be hard to explain to the early buyers. I guess these things happen from time to time, when one company calculates what kind of margin would be sufficient and the other just takes as much as they believe they possibly can.
 

A///

Senior member
Feb 24, 2017
829
578
106
Preferably before orders are starting to take place. :D
OK. But if it's after the initial sales period (supplies are finite), then Nvidia has to go through the process of refunding the difference if they're forced to lower prices. Or they cancel all the orders and have people reorder.

In either case, consumers will be pissed off.
 
  • Like
Reactions: Tlh97 and kurosaki

Konan

Senior member
Jul 28, 2017
360
291
106
Probably an announcement in October (along with the new CPUs) and availability in November. Remember, Nov 11 is Singles' Day (Double 11), the largest shopping day in the world. It would be nice for them to get something out before then, IMO.
 
  • Like
Reactions: spursindonesia

Shivansps

Diamond Member
Sep 11, 2013
3,126
793
136
Last year, AMD released info about their GPUs first, and then Nvidia released the RTX Super GPUs with counter-pricing, which resulted in AMD's famous "Jebaited" move.

What makes anyone here believe that waiting for the launch and the actual start of sales, then countering Nvidia's offer on performance and pricing, is a bad move? What if Nvidia has prepared a similar move to counter AMD?

That is why AMD is holding their hand close to their chest.
Last year AMD was coming A YEAR LATE to try to compete with Nvidia; it was very clear that Nvidia was ready to counter, and AMD's pricing allowed it. And as if coming a year late weren't bad enough, they were too greedy as well.
That is the worst possible scenario.
 

TESKATLIPOKA

Senior member
May 1, 2020
298
315
96
Yeah.

Navi 23 will be just behind RTX 3070, and Navi 22 will be just behind RTX 3080. That is correct.

Guys, use your logic. Look at the RX 5700 XT and its performance per ALU/CU compared to Turing, add the redesigned caches, which massively increase internal bandwidth, add 25% higher core clocks (2.3 GHz) and 10% IPC.

Then apply those equations to a 50% bigger GPU with 60 CUs, and then throw a 2x bigger GPU into the mix.

I'm sometimes baffled that even when people have these things so plainly in front of their faces, they fail to see them.
If I ignore the limited bandwidth, then Navi 23 with 40 CUs is only 100 × 1.25 × 1.1 ≈ 138, i.e. 38% faster than the 5700 XT, which puts it behind the RTX 2080 Ti (50% faster than the 5700 XT), and yet the RTX 3070 should be faster still. Either it has more CUs or you made a mistake in your calculation (prediction).
BTW, this Navi 23 can clock up to 2.3 GHz according to you, yet has only 150W TBP and 12GB VRAM?

Now, if Navi 22 has 60 CUs at 2.3 GHz, it would be 138 × 1.5 = 207, i.e. 107% faster than the 5700 XT (with purely linear scaling, so slower in reality), which should be close to RTX 3080 performance. All that with only 225W TBP, which would mean a >100% increase in performance/W compared to the RX 5700 XT.
Navi 21 with 80 CUs at 2.3 GHz: 207 × 1.33 ≈ 275 (again assuming linear scaling), i.e. 175% faster than the RX 5700 XT. This could also be close to the RTX 3090, with >100% better performance/W than the RX 5700 XT.

You wrote about how people fail to see things, yet in my humble opinion you are the one ignoring some very important things. How can RDNA2, with a higher CU count, higher clock speeds and more VRAM, have such a low TBP and be >100% better in performance/W than its direct predecessor, the RX 5700 XT, on a similar process? AMD claimed a 50% better performance/W ratio, yet you are saying it's more than double that.
Are you really surprised people are so sceptical about your claims? I am not.

P.S. It's not that I wouldn't like such cards; the mobile versions especially would be killer, and I would seriously consider buying a new laptop. But it's simply too good to be true, in my opinion.
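For what it's worth, the back-of-the-envelope scaling used above can be written out as a short sketch. It assumes, as the post does, +25% clocks, +10% IPC, and perfectly linear CU scaling with no bandwidth limits; these are the post's assumptions, not confirmed specs.

```python
# Naive RDNA2 performance estimate, normalized to RX 5700 XT = 100.
CLOCK_GAIN = 1.25   # assumed ~2.3 GHz vs the 5700 XT's clocks
IPC_GAIN = 1.10     # assumed +10% IPC

def estimate(cus, base_cus=40, base_perf=100.0):
    """Relative performance vs the RX 5700 XT under linear CU scaling."""
    return base_perf * CLOCK_GAIN * IPC_GAIN * (cus / base_cus)

print(round(estimate(40), 2))  # 137.5  -> ~38% faster than the 5700 XT
print(round(estimate(60), 2))  # 206.25 -> ~106% faster (the post rounds 138 x 1.5 to 207)
print(round(estimate(80), 2))  # 275.0  -> ~175% faster
```

The point of the exercise is that the >100% perf/W claim falls straight out of these three multipliers; whether the multipliers themselves are realistic is exactly what the thread is arguing about.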
 
Last edited:

senseamp

Lifer
Feb 5, 2006
34,747
4,619
126
100% better perf/W on the same 7nm process?
That would mean their engineers were so incompetent that they left 50% power savings on the table with the 5700 XT.
Even AMD isn't claiming that. Their own cherry-picked metrics show a 50% improvement, which still means their engineers left 33% power savings on the table even on the brand-new Navi architecture, and that is not something to brag about either.
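The arithmetic behind those percentages: at equal performance, a perf/W gain of g cuts power to 1/(1+g) of the original. A minimal sketch:

```python
def power_savings_at_iso_perf(gain):
    """Fraction of power saved at equal performance, for a given
    perf/W improvement (gain=1.0 means +100% perf/W)."""
    return 1.0 - 1.0 / (1.0 + gain)

print(power_savings_at_iso_perf(1.0))           # 0.5   -> +100% perf/W implies half the power
print(round(power_savings_at_iso_perf(0.5), 3)) # 0.333 -> +50% perf/W implies ~33% less power
```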
 

Glo.

Diamond Member
Apr 25, 2015
4,652
3,281
136
If I ignore the limited bandwidth, then Navi 23 with 40 CUs is only 100 × 1.25 × 1.1 ≈ 138, i.e. 38% faster than the 5700 XT, which puts it behind the RTX 2080 Ti (50% faster than the 5700 XT), and yet the RTX 3070 should be faster still. Either it has more CUs or you made a mistake in your calculation (prediction).
BTW, this Navi 23 can clock up to 2.3 GHz according to you, yet has only 150W TBP and 12GB VRAM?

Now, if Navi 22 has 60 CUs at 2.3 GHz, it would be 138 × 1.5 = 207, i.e. 107% faster than the 5700 XT (with purely linear scaling, so slower in reality), which should be close to RTX 3080 performance. All that with only 225W TBP, which would mean a >100% increase in performance/W compared to the RX 5700 XT.
Navi 21 with 80 CUs at 2.3 GHz: 207 × 1.33 ≈ 275 (again assuming linear scaling), i.e. 175% faster than the RX 5700 XT. This could also be close to the RTX 3090, with >100% better performance/W than the RX 5700 XT.

You wrote about how people fail to see things, yet in my humble opinion you are the one ignoring some very important things. How can RDNA2, with a higher CU count, higher clock speeds and more VRAM, have such a low TBP and be >100% better in performance/W than its direct predecessor, the RX 5700 XT, on a similar process? AMD claimed a 50% better performance/W ratio, yet you are saying it's more than double that.
Are you really surprised people are so sceptical about your claims? I am not.

P.S. It's not that I wouldn't like such cards; the mobile versions especially would be killer, and I would seriously consider buying a new laptop. But it's simply too good to be true, in my opinion.
Firstly, are you SURE you are not overinflating the performance of Ampere GPUs in your calculations?

Secondly, it's already a dead horse, and we are still beating it.

A 52 CU GPU at 1.8 GHz (the same clock as the RX 5700 XT) supposedly uses anywhere between 130 and 140W of power, even according to MS's own material.

If we take into account the power draw of the 256-bit GDDR6 memory, around 40W, we get anywhere between 170 and 180W for a GPU, if the physical design is 1:1 the same for dGPUs. And I won't comment on whether it's the same or not.

So effectively you get 30% more CUs than the RX 5700 XT, at the same core clocks, using 20% less power. That already beats AMD's claim of a 50% improvement per watt for the new generation, and it doesn't even include IPC uplifts.

Thirdly, I never said that we will actually get 2.3 GHz on Navi 22. I only said that at 2.3 GHz, a 60 CU GPU draws around 180W for the GPU portion only, based on my info. That is all I knew. Whether this was an experiment by AMD, testing, or we REALLY are getting 2.3 GHz on N22? I don't know.

I personally expect clock speeds to go down as CU counts go up.

Fourth, AMD's physical design team is now at RTG, and they are responsible for optimizing the design. For what the teams at AMD are capable of, look no further than Renoir.

RDNA2 is a Maxwell-like step in AMD's GPU architectures.

Is it hard to believe that AMD achieved such large gains on the same process? Yes, it is. But it also shows what good engineering can achieve, if the rumors are to be believed.
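The Xbox-derived argument above can be sketched numerically. Every input is a rumored figure from the post (~140W for the GPU logic, ~40W for the memory, 225W board power for the 5700 XT), and it assumes performance scales linearly with CU count with zero IPC gain:

```python
# Rumored figures from the thread; not measurements.
XSX_EQUIV_TBP = 140 + 40   # ~140W GPU logic + ~40W GDDR6 -> dGPU-equivalent board power
N10_TBP = 225              # RX 5700 XT total board power (W)

perf_ratio = 52 / 40       # 30% more CUs at roughly the same clock, no IPC gain assumed
power_ratio = XSX_EQUIV_TBP / N10_TBP

perf_per_watt = perf_ratio / power_ratio
print(round(perf_per_watt, 3))  # 1.625 -> ~62% better perf/W, already above AMD's +50% claim
```

Note how sensitive the conclusion is to the power inputs: taking the low end (130W + 40W = 170W) pushes the figure higher still, while any IPC uplift would add on top.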
 
Last edited:

A///

Senior member
Feb 24, 2017
829
578
106
Probably an announcement in October (along with the new CPUs) and availability in November. Remember, Nov 11 is Singles' Day (Double 11), the largest shopping day in the world. It would be nice for them to get something out before then, IMO.
Every Friday night is singles day for wine and beer makers, as well as Kleenex. Presentation at the end of this month, product launch sometime in October, consoles between Nov 11 and the 27th.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,358
2,050
136
Probably an announcement in October (along with the new CPUs) and availability in November. Remember, Nov 11 is Singles' Day (Double 11), the largest shopping day in the world. It would be nice for them to get something out before then, IMO.
Funny thing is, this is the first time I've even heard of November 11th being Singles' Day.
 
  • Like
Reactions: Tlh97 and Martimus

randomhero

Member
Apr 28, 2020
108
154
76
OK, here goes nothing! :)

Navi 23
40 CUs, 2 GHz, 10.2 TFLOPs, 256-bit bus, 16 GT/s ~ 20-25% faster than 5700 XT / same as 2080 Super

Navi 22
60 CUs, 2 GHz, 15.3 TFLOPs, 384-bit bus, 16 GT/s ~ 10-20% faster than 2080 Ti

Navi 21
80 CUs, 2.2 GHz, 22.5 TFLOPs, 2048-bit bus, HBM2e 2.4 GT/s ~ 60-70% faster than 2080 Ti
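Those TFLOPs figures follow from the usual formula for RDNA-style GPUs (64 FP32 lanes per CU, 2 ops per clock via FMA). A quick sanity check, treating randomhero's CU counts and clocks as the guesses they are:

```python
def fp32_tflops(cus, ghz):
    """FP32 TFLOPs = CUs x 64 shaders/CU x 2 ops/clock (FMA) x clock (GHz) / 1000."""
    return cus * 64 * 2 * ghz / 1000.0

print(round(fp32_tflops(40, 2.0), 2))  # 10.24 (Navi 23 guess)
print(round(fp32_tflops(60, 2.0), 2))  # 15.36 (Navi 22 guess)
print(round(fp32_tflops(80, 2.2), 2))  # 22.53 (Navi 21 guess)
```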
 

TESKATLIPOKA

Senior member
May 1, 2020
298
315
96
Firstly, are you SURE you are not overinflating the performance of Ampere GPUs in your calculations?

Secondly, it's already a dead horse, and we are still beating it.
At least the RTX 3070 should be correct: it should be faster than the RTX 2080 Ti, even if only by 5%. And even if I am wrong about the RTX 3080 and RTX 3090, does it matter when my calculation for RDNA2 isn't accurate anyway (linear scaling from adding CUs, the same 2.3 GHz clock speed, no bandwidth bottleneck)? The performance gap between RDNA2 and Ampere wasn't even my main issue. My main issue is TBP and the performance/W ratio.

A 52 CU GPU at 1.8 GHz (the same clock as the RX 5700 XT) supposedly uses anywhere between 130 and 140W of power, even according to MS's own material.

If we take into account the power draw of the 256-bit GDDR6 memory, around 40W, we get anywhere between 170 and 180W for a GPU, if the physical design is 1:1 the same for dGPUs. And I won't comment on whether it's the same or not.

So effectively you get 30% more CUs than the RX 5700 XT, at the same core clocks, using 20% less power. That already beats AMD's claim of a 50% improvement per watt for the new generation, and it doesn't even include IPC uplifts.
The RX 5700 XT has 1,887 MHz as its average clock speed, so that's a bit higher, and the process is also a bit better, no? 30% more CUs doesn't mean 30% more performance; IPC is missing, as you said. Let's say it's 35% faster than the RX 5700 XT; then performance/W is 69% better (1.35 / 0.8), which, as you said, is more than AMD's 50% claim, but it's also nowhere near what I calculated based on your posts. And RDNA2 should be clocked a lot higher than the Xbox, which doesn't help power consumption.

Thirdly, I never said that we will actually get 2.3 GHz on Navi 22. I only said that at 2.3 GHz, a 60 CU GPU draws around 180W for the GPU portion only, based on my info. That is all I knew. Whether this was an experiment by AMD, testing, or we REALLY are getting 2.3 GHz on N22? I don't know.

I personally expect clock speeds to go down as CU counts go up.
You wrote it as if you meant 2.3 GHz for everything.
Here:
Guys, use your logic. Look at the RX 5700 XT and its performance per ALU/CU compared to Turing, add the redesigned caches, which massively increase internal bandwidth, add 25% higher core clocks (2.3 GHz) and 10% IPC.

Then apply those equations to a 50% bigger GPU with 60 CUs, and then throw a 2x bigger GPU into the mix.
And I just did that. It "could" also be 2.3 → 2.1 → 1.9 GHz, and I don't know how accurate your info about Navi 22 is.

Fourth, AMD's physical design team is now at RTG, and they are responsible for optimizing the design. For what the teams at AMD are capable of, look no further than Renoir.

RDNA2 is a Maxwell-like step in AMD's GPU architectures.

Is it hard to believe that AMD achieved such large gains on the same process? Yes, it is. But it also shows what good engineering can achieve, if the rumors are to be believed.
Renoir with its Vega iGPU has very impressive clocks, I agree with that.
Nvidia managed only ~50% better performance/W going from Kepler (28nm) to Maxwell (28nm), and again from Maxwell (28nm) to Pascal (16nm).
This would be much more than Nvidia has managed so far, but then, they don't have a CPU division.
If it turns out you were right, then great for AMD and for customers; if not, I won't be very disappointed.
 
  • Like
Reactions: Tlh97

blckgrffn

Diamond Member
May 1, 2003
7,366
636
126
www.teamjuchems.com
No love for the new Series S launch? Targeting 1440p at 120Hz with roughly the same TFLOPs rating as the One X. They must be pretty confident in the performance scaling on the GPU side, and it raises the question of how much of a hit RT performance will take and how that will affect RT adoption as a feature in the PC space. How will RT-enabled Minecraft run on it? My 9-year-old wants to know!

I would be all over it if it had a UHD drive and full HDR support at $300. Hello, new media device and occasional gaming device. But no... so it's a pass.
 
  • Like
Reactions: lightmanek

TESKATLIPOKA

Senior member
May 1, 2020
298
315
96
1440p at 120Hz with only 4 TFLOPs? I am pretty sceptical, unless it's with some sort of upscaling.
How many FPS did the One X achieve at the same resolution? BTW, the Xbox One X has 6 TFLOPs.
 
  • Like
Reactions: Tlh97 and Konan

eek2121

Senior member
Aug 2, 2005
729
783
136
I can believe that RDNA2 will clock to 2.5 GHz in very specific scenarios.

I cannot believe fairy tales that it will clock up to 3 GHz.

I cannot believe that we will get anything beyond 2.3 GHz on lower-ALU-count GPUs, and at best 2.1 GHz on Navi 21.

Even that 2.1 GHz core clock on an 80 CU GPU with a 384-bit bus would result in 275W TDP and around 90-95% of the performance of the RTX 3090.
I don't claim to know where final clocks will land, because that depends on CU count along with other factors. However, based on available information and what I have been told, I can speculate that AMD will be setting some GPU clock speed records. Clock speed is, of course, meaningless without knowing more about the architecture...
Scaling... Why isn't the XSX clocked higher?
52 CU Xbox Series X GPU @ 1.825 GHz
36 CU PS5 GPU can reach 2.23 GHz
Consoles are power limited. The entire console has to consume less than 250W, and the GPU in the PS5 and XBSX has a dynamic boost and uses up to 140W of power.

The PC GPUs will be a different beast. There aren't any CPU cores to compete with, and the total board power of the GPU can be 280W or more. The CUs will also be a slightly different design from the consoles'.
 

Elfear

Diamond Member
May 30, 2004
7,030
548
126
Firstly, are you SURE you are not overinflating the performance of Ampere GPUs in your calculations?

Secondly, it's already a dead horse, and we are still beating it.

A 52 CU GPU at 1.8 GHz (the same clock as the RX 5700 XT) supposedly uses anywhere between 130 and 140W of power, even according to MS's own material.

If we take into account the power draw of the 256-bit GDDR6 memory, around 40W, we get anywhere between 170 and 180W for a GPU, if the physical design is 1:1 the same for dGPUs. And I won't comment on whether it's the same or not.

So effectively you get 30% more CUs than the RX 5700 XT, at the same core clocks, using 20% less power. That already beats AMD's claim of a 50% improvement per watt for the new generation, and it doesn't even include IPC uplifts.

Thirdly, I never said that we will actually get 2.3 GHz on Navi 22. I only said that at 2.3 GHz, a 60 CU GPU draws around 180W for the GPU portion only, based on my info. That is all I knew. Whether this was an experiment by AMD, testing, or we REALLY are getting 2.3 GHz on N22? I don't know.

I personally expect clock speeds to go down as CU counts go up.

Fourth, AMD's physical design team is now at RTG, and they are responsible for optimizing the design. For what the teams at AMD are capable of, look no further than Renoir.

RDNA2 is a Maxwell-like step in AMD's GPU architectures.

Is it hard to believe that AMD achieved such large gains on the same process? Yes, it is. But it also shows what good engineering can achieve, if the rumors are to be believed.
But wouldn't the safe approach (and the one least likely to get the hype train going) be to assume AMD was telling the truth when they said +50% perf/W? I would love for RDNA2 to knock it out of the park and give Nvidia competition at all levels, but we have been down this road many, many times. The hype train gets going at ludicrous speed, and we all wait with bated breath, credit cards at the ready, only to find that the performance increase is much lower than expected. I'd much rather have realistic expectations and be pleasantly surprised than be disappointed again when the actual product can't live up to the hype.
 
  • Like
Reactions: Tlh97 and Gideon

blckgrffn

Diamond Member
May 1, 2003
7,366
636
126
www.teamjuchems.com
1440p at 120hz with only 4Tflops? I am pretty sceptical, unless It's with some sort of upscaling.
How much FPS did One X achieve in the same resolution? BTW XBOX one X has 6TFLOPs.
Polaris FLOPS vs RDNA2 FLOPS? That was kinda the point I was trying to make. It seems like you get more bang for the buck with the RDNA2 flops... and some of that is going to be supporting RT, right?

The One X was also crippled by being tied to a netbook Jaguar CPU *but* was also touted as this amazing 4K+HDR console. I think that there were a good number of 4k/30 fps titles out there.
 
  • Like
Reactions: Tlh97
