
Speculation: RDNA2 + CDNA Architectures thread


moinmoin

Platinum Member
Jun 1, 2017
2,767
3,666
136
This is why I have an issue with using the consoles as a point of comparison to evaluate desktop RDNA2.
(A screenshot, not mine but real, from a since-deleted post by a principal engineer at Sony)



I asked a few pages back about scaling....


If we are looking at consoles for speculation, then I feel the XSX is the best comparison, specifically the clocks. You've got an XSX 52 CU part at 1.8 GHz. I seriously doubt you will get an RDNA2 desktop card that runs at 2.3-2.5 GHz as standard all the time. I think we'll get boosts to around 2.2 GHz, but not as a sustained clock.
As for the PS5, I believe it will be powerful and compete well, and the design is more of a "cousin" to the XSX. Sure it has speed, but maybe that is because it is a cousin and not a full v2 approach.
I also don't think it is effective to look at PS5 power, for example, and extrapolate how a desktop RDNA2 card will be, because of this "hybrid".
Going by the gfx10xy listing, e.g.:
It's in the Linux GPU drivers. Each RDNA product has an identifier along the lines of gfx10XY. The X is the family of products and the Y is the number of the product within that family, in the order in which the designs were created. It's fundamentally the same as Navi1X/2X, but only includes dies that have passed a later stage on the way to becoming a real product. That is to say, for example, Navi11, a dead project, does not have a gfx number.

Navi10Lite - gfx1000 (PS5)
Navi14Lite - gfx1001 (Lockhart?)
Navi10 - gfx1010 (5700XT/5700/5600XT)
Navi12 - gfx1011 (Unknown, but 40CUs and HBM2)
Navi14 - gfx1012 (5500XT/5500M/5300M)
Navi21Lite - gfx1020 (Xbox Series X)
Navi21 - gfx1030 (Rumour: ~500mm^2)
Navi22 - gfx1031 (Rumour: ~250mm^2)
Navi23 - gfx1032 (?)
VanGogh - gfx1033
VanGoghLite - gfx1040
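As a sanity check on the scheme above, here's a tiny sketch of the naming pattern as I read it. The family/product split is an informal reading of the list, not official AMD documentation:

```python
# Informal decoder for the gfx10XY pattern described above:
# X = family, Y = index of the die within that family.
# The table is transcribed from the list in this post.

GFX_IDS = {
    "gfx1000": "Navi10Lite (PS5)",
    "gfx1001": "Navi14Lite (Lockhart?)",
    "gfx1010": "Navi10 (5700 XT/5700/5600 XT)",
    "gfx1011": "Navi12 (40 CUs, HBM2)",
    "gfx1012": "Navi14 (5500 XT/5500M/5300M)",
    "gfx1020": "Navi21Lite (Xbox Series X)",
    "gfx1030": "Navi21",
    "gfx1031": "Navi22",
    "gfx1032": "Navi23",
    "gfx1033": "VanGogh",
    "gfx1040": "VanGoghLite",
}

def decode(gfx_id: str):
    """Split a gfx10XY identifier into (family X, product Y, known name)."""
    family, product = int(gfx_id[5]), int(gfx_id[6])
    return family, product, GFX_IDS.get(gfx_id, "unknown")

print(decode("gfx1030"))  # (3, 0, 'Navi21')
```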

So what does all of that mean? Well, for starters, anything Lite is semi-custom. That should be fairly obvious given that the PS5 and Xbox SoCs are both there.

Next thing you might notice is the pattern there.

gfx1000, 1020 and 1040 are all used for semi-custom projects.

gfx1010 is RDNA1.

gfx1030 is RDNA2.

It's worth noting that, as semi-custom projects, they likely have features their gfx names don't let on. For example, the PS5 is closer to RDNA2 in terms of its performance, given how high it can clock, though that might be after future revisions (Oberon, Flute etc). The RTRT functionality was probably brought over from RDNA2 right from the beginning, but there's no clue as to the rest.
...work on PS5's GPU started along with RDNA1. XSX's joined later along with RDNA2, so it having a more modern base would be natural. So the latter should be more comparable with upcoming consumer RDNA2 GPUs.
 

Paul98

Diamond Member
Jan 31, 2010
3,699
138
106
I have more confidence in AMD executing and not over-promising as they have in the past, along with where they are starting from.
They have done a far better job of delivering what they promise now than they did during the GCN revisions.
I can't imagine they would make the same mistakes again, especially given what we have seen so far from RDNA1.
They should be able to reach a good bit higher transistor density on RDNA2; the 5700 XT was quite low compared to where it could be now.
Unlike GCN, where even the basic math looked bad for scaling up past something like the 290X, RDNA looks to be a far better path forward. Now we just have to wait and see on the execution and whether there are any major deficiencies when scaling up.
We will have to wait and see, obviously, but it seems far more realistic to expect something that performs well than it did in the recent past.
 

uzzi38

Golden Member
Oct 16, 2019
1,817
3,654
116
This post feels bordering on bait territory, but screw it I'll bite.

I think you diehards on the red side are setting yourselves up for disappointment again. A couple of things don't make sense to me about expecting Navi to scale linearly.
1) The 5700 was a relatively small chip that performed well and scaled well compared to the smaller chip. So why didn't AMD release something bigger?
What would be the point in wasting time and resources on a larger die when you're working to get a new generation out with 50% perf/W uplift within 18 months (and looking like ~15 months at that).

And that's ignoring the fact AMD still definitely did not have a great uArch in terms of efficiency and that 7nm prices were still rather high.

2) They're already on 7nm, so it's not even like they're moving to a new node and have to redo everything.
Yeah, just like Nvidia did with Maxwell.

Huge improvements on the same node are most definitely feasible.


3) Does AMD hate money? Why sit on a 2080 Ti competitor all these months (a year?) when the price was so high? If they can make a 5800 XT that's around the 2080 Ti, did they think there would be no demand at $1000 a year ago? Six months ago?
This is literally the same as your first point. Like I said, it wouldn't have been very competitive launching even later than Navi10 did and closer to Ampere. Navi10 was already too late for my liking honestly.
It's the same reason I argue against Nvidia pulling a "Super" on 7nm. It'd be a waste of time and resources: while they put manpower into a GPU that will have a minor effect on the market, their competitor would be well on their way to their next-generation products and ready to absolutely clap them with real improvements.

4) New versions of Ryzen CPUs seem to come out all the time, so it's not like the company has a problem with customers doing frequent upgrades.
You mean all the ones using the exact same die?

Lol.

If this is true, then AMD really should think about changing the marketing team, because they are idiots. Now the most they can charge for a 2080 Ti competitor is $500, and $700 for a 3080 competitor, and it wouldn't come out until the end of October at the earliest, giving NV another month to monopolize GPU sales. They screwed their shareholders and board partners out of tons of money, and left consumers without any competition on the high end. The 3080 memes shouldn't be about 2080 Ti owners; they should be about AMD, because Big Navi's value just dropped in half. WTF AMD?
Yes, I'm sure it's AMD's marketing that decided when their products would be ready for launch.
 

Mopetar

Diamond Member
Jan 31, 2011
6,180
3,010
136
Without knowing what RDNA2 is, it's difficult to say how far or close it is to whatever is in the PS5. However, that's just one console, and the Xbox seems to be close to RDNA2 based on what's been said publicly.

But I'm not sure it really matters. I think we should assume that AMD wouldn't make RDNA2 worse than what's in either console, whatever degree of hybrid they are. Console comparisons are just a baseline from which AMD could improve in that case.
 

Tup3x

Senior member
Dec 31, 2016
536
397
136
This is why I have an issue with using the consoles as a point of comparison to evaluate desktop RDNA2.
(A screenshot, not mine but real, from a since-deleted post by a principal engineer at Sony)



I asked a few pages back about scaling....


If we are looking at consoles for speculation, then I feel the XSX is the best comparison, specifically the clocks. You've got an XSX 52 CU part at 1.8 GHz. I seriously doubt you will get an RDNA2 desktop card that runs at 2.3-2.5 GHz as standard all the time. I think we'll get boosts to around 2.2 GHz, but not as a sustained clock.
As for the PS5, I believe it will be powerful and compete well, and the design is more of a "cousin" to the XSX. Sure it has speed, but maybe that is because it is a cousin and not a full v2 approach.
I also don't think it is effective to look at PS5 power, for example, and extrapolate how a desktop RDNA2 card will be, because of this "hybrid".
I wouldn't be surprised if the Xbox turned out to be closer to PC RDNA2. It's likely that ~1.8 GHz is the sweet spot, and after that the perf/W starts to deteriorate quickly. Also, no one knows how things scale past 52 CUs. I'd imagine ~2 GHz being realistic (that's one way to make use of the improved perf/W: clock it higher before things get out of hand), and definitely closer to 2 GHz than 2.5 GHz.
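The sweet-spot argument can be sketched with a toy dynamic-power model: power scales roughly with V²·f, and voltage has to rise with frequency near the top of the curve, so perf/W falls off. The V/f numbers below are invented for illustration, not RDNA measurements:

```python
# Toy model of why perf/W deteriorates past a "sweet spot" clock.
# Dynamic power ~ C * V^2 * f, and V must rise roughly linearly with f
# near the top of the V/f curve. The curve below is made up.

def voltage(f_ghz: float) -> float:
    """Hypothetical V/f curve: 0.80 V at 1.5 GHz, +0.35 V per extra GHz."""
    return 0.80 + 0.35 * (f_ghz - 1.5)

def rel_perf_per_watt(f_ghz: float, f_ref: float = 1.8) -> float:
    """Perf/W relative to f_ref: perf ~ f, power ~ V^2 * f, so perf/W ~ 1/V^2."""
    return (voltage(f_ref) / voltage(f_ghz)) ** 2

for f in (1.8, 2.0, 2.2, 2.5):
    print(f"{f:.1f} GHz: {rel_perf_per_watt(f):.2f}x perf/W vs 1.8 GHz")
```

Under these (made-up) numbers, perf/W drops steadily past 1.8 GHz, which is the shape of the argument above, not a prediction of real clocks.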
 

Glo.

Diamond Member
Apr 25, 2015
4,835
3,457
136
If we are looking at consoles for speculation, then I feel the XSX is the best comparison, specifically the clocks. You've got an XSX 52 CU part at 1.8 GHz. I seriously doubt you will get an RDNA2 desktop card that runs at 2.3-2.5 GHz as standard all the time. I think we'll get boosts to around 2.2 GHz, but not as a sustained clock.
If it is in between RDNA1 and RDNA2, with physical optimisation but without architectural advancements that increase core clocks, then I think RDNA2 is shaping up way better than we initially thought ;).

It may also explain why Paul from RedGamingTech, in one of his videos discussing the PS5, said that the PS5 had problems clocking past 2.25 GHz because of the inherent nature of the CU design, which would not allow clocking past that point without problems. That is an interesting point of view.

I wouldn't worry about it. A less efficient architecture, like Vega in the Renoir APUs, clocks to 2.1 GHz and OCs to 2.4 GHz without problems, in a design that has 60 million transistors/mm² (and as we know, the higher the transistor density, the harder it is to clock high).
 

Kenmitch

Diamond Member
Oct 10, 1999
8,502
2,244
136
I think you diehards on the red side are setting yourselves up for disappointment again. A couple of things don't make sense to me about expecting Navi to scale linearly.
1) The 5700 was a relatively small chip that performed well and scaled well compared to the smaller chip. So why didn't AMD release something bigger?
2) They're already on 7nm, so it's not even like they're moving to a new node and have to redo everything.
3) Does AMD hate money? Why sit on a 2080 Ti competitor all these months (a year?) when the price was so high? If they can make a 5800 XT that's around the 2080 Ti, did they think there would be no demand at $1000 a year ago? Six months ago?
4) New versions of Ryzen CPUs seem to come out all the time, so it's not like the company has a problem with customers doing frequent upgrades.

If this is true, then AMD really should think about changing the marketing team, because they are idiots. Now the most they can charge for a 2080 Ti competitor is $500, and $700 for a 3080 competitor, and it wouldn't come out until the end of October at the earliest, giving NV another month to monopolize GPU sales. They screwed their shareholders and board partners out of tons of money, and left consumers without any competition on the high end. The 3080 memes shouldn't be about 2080 Ti owners; they should be about AMD, because Big Navi's value just dropped in half. WTF AMD?
Take a deep breath and slowly exhale. You might need to do this a couple of times to feel the true effect.

The lapse from Navi to Big Navi is what, a little over a year? I'd imagine the focus was on console development and Zen 3.

Maybe it's just me, but I believe Lisa Su knows what's best for AMD in the long run.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
This post feels bordering on bait territory, but screw it I'll bite.
How is this bait? All I gave were historical facts, and a question about why there's an expectation that Big Navi will just scale perfectly.

What would be the point in wasting time and resources on a larger die when you're working to get a new generation out with 50% perf/W uplift within 18 months (and looking like ~15 months at that).

And that's ignoring the fact AMD still definitely did not have a great uArch in terms of efficiency and that 7nm prices were still rather high.
$$$ is the reason. The 5700 is a small chip; just add 50% more cores if things scale so wonderfully. It's pretty efficient if the claim about scaling is true. You make way more money with a bigger chip, and more importantly you gain mind share and lift the stock price. So, again, $$$$$$$$$$$$.

Huge improvements on the same node are most definitely feasible.
Who's talking about uArch improvements? Since everyone seems to shit on RT, what new tech did AMD need to add? Just make it faster, since you have great scaling. Just add CUs.


This is literally the same as your first point. Like I said, it wouldn't have been very competitive launching even later than Navi10 did and closer to Ampere. Navi10 was already too late for my liking honestly.
It's the same reason I argue against Nvidia pulling a "Super" on 7nm. It'd be a waste of time and resources: while they put manpower into a GPU that will have a minor effect on the market, their competitor would be well on their way to their next-generation products and ready to absolutely clap them with real improvements.
Super was not a waste of time; I'm sure most 2000-series cards sold are Supers. You don't think having a 2080 Super competitor is valuable? People keep bringing up how well Navi scales, so why was it hard to release a bigger Navi? Because it doesn't scale as well as you think, or management is really stupid.

You mean all the ones using the exact same die?

Lol.
What?? We've gone through Ryzen, Ryzen+, Ryzen 2, and now Ryzen 3 in less than 4 years. Are you kidding? They're not the same die. Each release has some really nice IPC improvements.

Yes, I'm sure it's AMD's marketing that decided when their products would be ready for launch.
Exactly, and that means, technology-wise, it's not as simple as many of the claims here make it out to be. If it were that simple and things scaled so wonderfully, then the only logical conclusion is that their marketing and management are ultra stupid.
 

Timorous

Senior member
Oct 27, 2008
670
764
136
The XB Series X shows nearly identical transistor density to the 5700 XT, so I wouldn't expect a big change.
That means nothing though. MS could have done that as the lower density version was the best price/performance/yield/power density compromise for their requirements.

We already know 7nm can make huge, dense dies because GA100 is built on it with ~54B transistors in 826 mm² (about 65M/mm²).
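Those density figures are easy to sanity-check with the widely reported public numbers (not my own measurements):

```python
# Back-of-envelope transistor density check using public figures:
# GA100: ~54.2B transistors, 826 mm^2 (TSMC 7nm)
# Navi10 (5700 XT): ~10.3B transistors, 251 mm^2 (TSMC 7nm)
dies = [
    ("GA100", 54.2e9, 826),
    ("Navi10", 10.3e9, 251),
]

for name, transistors, area_mm2 in dies:
    density = transistors / area_mm2 / 1e6  # M transistors per mm^2
    print(f"{name}: {density:.1f} M transistors/mm^2")
```

So Navi10 sits around 41M/mm² while GA100 shows ~65M/mm² is achievable on the same node, which is the headroom being argued about here.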
 

Saylick

Golden Member
Sep 10, 2012
1,056
966
136


Is it possible to get a rough die-size estimate just by scaling off the area of the surface-mounted devices on the back side of the die and comparing it to the PCIe slot length?

Just curious if this aligns with a die size around the 500mm2 range.

EDIT: Okay, my rough math gets me around 25 mm x 30 mm for the back side of the die. Seems like ~500 mm² is reasonable.
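For anyone curious how that kind of estimate works, here's a minimal sketch: measure a feature of known length in the photo, then cross-multiply. All pixel values below are made-up placeholders; the PCIe x16 connector is roughly 89 mm long. Note the back-side footprint only bounds the die size from above, since the die itself is smaller than the SMD region behind it.

```python
# Scaling estimate from a board photo: measure a known reference and the
# target region in pixels, then cross-multiply. Pixel numbers are
# placeholders, not actual measurements from the leaked image.
PCIE_X16_MM = 89.0  # approximate length of a PCIe x16 connector

def estimate_mm(ref_px: float, target_px: float, ref_mm: float = PCIE_X16_MM) -> float:
    """Convert a pixel measurement to mm via a reference of known length."""
    return target_px * ref_mm / ref_px

# Hypothetical measurements:
ref_px = 890.0                    # PCIe connector as measured in the image
die_w_px, die_h_px = 250.0, 300.0  # back-side SMD region

w = estimate_mm(ref_px, die_w_px)
h = estimate_mm(ref_px, die_h_px)
print(f"~{w:.0f} x {h:.0f} mm footprint, upper bound ~{w * h:.0f} mm^2")
```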
 

Krteq

Senior member
May 22, 2015
969
643
136
Wait, so are you trying to say that the Kepler -> Maxwell transition on the same process (TSMC 28nm) was some kind of miracle?

In the early RDNA 2 design stages, Suzanne Plummer's "Zen team" was brought into RTG and got involved in optimization work on future uarchs. You can see their results in the semi-custom XSX/PS5 SoCs and the Renoir Vega GPU power optimizations.

Are you really trying to tell us that AMD is not able to create/design anything new? Well, try checking the patents AMD has submitted over the last 4 years ;)
 

Karnak

Senior member
Jan 5, 2017
219
306
136
Well, redgamingtech's "infinity cache" sounds more and more realistic IMO since I trust rogame.

16GB = 256bit + Infinity cache.

I think 512-bit is just too much, and the engineering board was 256-bit with GDDR...
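The 16 GB / 256-bit pairing follows from standard GDDR6 packaging: each chip has a 32-bit interface, so the bus width fixes the chip count, and the chip count times per-chip capacity fixes total VRAM. A quick check (standard GDDR6 figures, not leaked board specs):

```python
# VRAM capacity implied by bus width with standard GDDR6:
# one chip per 32 bits of bus, 2 GB (16 Gb) per chip.

def vram_options(bus_bits: int, chip_gb: int = 2):
    """Return (number of chips, total GB) for a given bus width."""
    chips = bus_bits // 32
    return chips, chips * chip_gb

for bus in (256, 384, 512):
    chips, gb = vram_options(bus)
    print(f"{bus}-bit bus -> {chips} chips -> {gb} GB")
```

So 256-bit lands exactly on 16 GB with 2 GB chips, while 512-bit would mean 16 chips and 32 GB, which is part of why it looks like too much here.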
 

Krteq

Senior member
May 22, 2015
969
643
136
Well, redgamingtech's "infinity cache" sounds more and more realistic IMO since I trust rogame.

16GB = 256bit + Infinity cache.
Sorry, I must have missed something.

Legit question - how exactly is that "infinity cache" supposed to work?
 

gk1951

Member
Jul 7, 2019
163
149
86
Take a deep breath and slowly exhale. You might need to do this a couple of times to feel the true effect.

The lapse from Navi to Big Navi is what, a little over a year? I'd imagine the focus was on console development and Zen 3.

Maybe it's just me, but I believe Lisa Su knows what's best for AMD in the long run.
Totally agree.

Dr. Su has Zen 3 coming out first to show how far Ryzen has come. This also buys time for Big Navi to get some polish. She is building out the stack. Let Nvidia have its day in the sun. I suspect Big Navi will beat the RTX 3070 but fall a bit short of the RTX 3080 (which, as I write this, is sold out EVERYWHERE).

I'm looking to stay with AMD and upgrade the Radeon VII in my 3900X rig.

I'll wait till RDNA2 drops and see how much difference there is in game play.
 

jpiniero

Lifer
Oct 1, 2010
10,098
2,360
136
Don't know if this was mentioned, but I saw that the PS5's power supply is rated 350 W, 340 W for the Digital Edition. Series X is 310 W I think.
 

Konan

Senior member
Jul 28, 2017
360
291
106
Sorry, I must have missed something.

Legit question - how exactly is that "infinity cache" supposed to work?
RedGamingTech is peddling the idea that AMD is using some sort of "breakthrough memory cache system", a.k.a. 'Infinity Cache', that does not require more VRAM bandwidth; i.e. RDNA2 uses a huge cache, called Infinity Cache, that could be 128 MB (or 64 MB) in size.

Others speculate that this larger cache could make sense because it would be a set-up for RDNA3 MCM as well as lessen inter-core performance hits.
With many people expecting RDNA2 to have a large increase in compute power, you would think a wider memory system is needed to feed the shaders more data.

Current Navi has a 256-bit bus, so max memory bandwidth doesn't look like it's increasing much, if at all. Without some other piece of information or technology, it looks like the additional shaders will be underfed.

The larger-cache speculation, mixed with pixie dust :innocent:, is supposed to overcome the limitations of a 256-bit bus, all wrapped up nicely in a package.
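As a rough illustration of why a big cache could stretch a 256-bit bus: if hits are serviced on-die, DRAM traffic only comes from misses, so a hit rate h multiplies effective bandwidth by 1/(1 - h). All numbers below are illustrative; nothing here is confirmed hardware:

```python
# Toy model of the rumoured cache: shaders see DRAM bandwidth amplified
# by the cache hit rate, assuming on-die hits are effectively free.

def dram_bandwidth_gbps(bus_bits: int, gbps_per_pin: float) -> float:
    """Raw DRAM bandwidth in GB/s for a given bus width and signalling rate."""
    return bus_bits * gbps_per_pin / 8

def effective_bandwidth(dram_gbps: float, hit_rate: float) -> float:
    """Apparent bandwidth if only misses touch DRAM."""
    return dram_gbps / (1.0 - hit_rate)

bus = dram_bandwidth_gbps(256, 16.0)  # 256-bit bus with 16 Gbps GDDR6
for h in (0.0, 0.25, 0.5):
    print(f"hit rate {h:.0%}: ~{effective_bandwidth(bus, h):.0f} GB/s")
```

Whether a 64-128 MB cache actually achieves hit rates like these at 4K is exactly the open question in the thread.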
 

DJinPrime

Member
Sep 9, 2020
87
89
51
Take a deep breath and slowly exhale. You might need to do this a couple of times to feel the true effect.

The lapse from Navi to Big Navi is what, a little over a year? I'd imagine the focus was on console development and Zen 3.

Maybe it's just me, but I believe Lisa Su knows what's best for AMD in the long run.
I guess you all are still missing my point. If Navi scales really well, then AMD had no reason to release just the 5700 and 5700 XT over a year ago. In development, the engineers should know how well their design scales. Since the 5700 was such a small chip, and if scaling was not an issue, making a bigger chip would not have been that much effort. Why would you limit yourself to 2070-level performance? Do you not want to be known as a high-performance company and sell chips? Why wait 1.5 years to make money when you can make money now? Things change, like the competition releasing a $700 card that is faster than the current $1200 card. You never sandbag yourself.
I'm a software developer, so I can only explain from that point of view. I have an app that processes multiple requests, and things seem to work really well when the number of concurrent requests is under 10. Performance-wise, it seems to scale really well. But as soon as there are 11 concurrent requests, I start seeing issues: locking timeouts, data being overwritten by parallel processes, memory heap issues, performance problems. So my program is only scalable up to a certain point. It's a leap to assume Big Navi scales the same way as the 5500 to the 5700.
If Big Navi does end up performing better than the 3080, it's because the engineers had to do a lot of awesome work to get around the problems of RDNA1, just like how I would have to redesign my program to be able to scale above 10.
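For what it's worth, that "scales fine until it doesn't" behaviour can be modelled with Gunther's Universal Scalability Law, where contention and coherency costs eventually flatten and then reverse throughput. The coefficients below are invented for illustration, not fitted to any GPU or app:

```python
# Gunther's Universal Scalability Law: throughput with n parallel workers,
# where alpha models contention (serialisation) and beta models coherency
# (crosstalk) costs. Both coefficients here are made up.

def usl_speedup(n: int, alpha: float, beta: float) -> float:
    """Relative throughput with n workers under the USL."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

alpha, beta = 0.02, 0.001
for n in (1, 8, 16, 32, 64):
    print(f"{n:3d} workers -> {usl_speedup(n, alpha, beta):.1f}x")
```

With these coefficients the curve rises, peaks around 32 workers, then declines: exactly the "works great under 10, falls over at 11" shape, just smoothed out.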
 
