Speculation: RDNA2 + CDNA Architectures thread


uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
All die sizes are within 5mm^2. The poster here has been right about some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have backed up. Even still, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

kurosaki

Senior member
Feb 7, 2019
258
250
86
They don't though. But yeah, there's no real reason to complain about DLSS. It improves speed with minimal visual quality loss and in some cases it is an improvement.


*Link to previous discussion*

I think the words "minimal quality loss" are extremely subjective here, as is "when everyone thinks it's a great feat, you have to re-evaluate".
Objectively, the IQ worsens. If some people don't care, fine, turn on DLSS. But it is still a total waste of die space. AMD has had upscaling for a long time now. Good, let those who want to run games like that do so. It just feels like a waste of money. If people don't care whether games look like they ran on a potato, why not just buy a $50 GPU and crank the resolution down to 720p? They still don't seem to see the difference on their TN panels...
 
Last edited:

jamescox

Senior member
Nov 11, 2009
644
1,105
136
In my opinion AMD can't launch anything for more than $800-900. They must know that people will not buy an AMD gaming card for $1000. They are not Nvidia.
Almost no one buys ridiculously high-priced Nvidia cards for gaming. They are "marketing cards" as far as gaming is concerned. Nvidia seems to have crippled the 3090 for non-gaming applications, so I don't really expect them to sell many of them. The thing that convinces me that AMD has a great architecture this round is Nvidia's 8K gaming nonsense for the 3090. Makes me wonder if they are going to be competitive at 4K.

After Threadripper's domination of the HEDT space for CPUs, you may have to get used to AMD being the higher-performance and higher-priced option. We don't know if they have replicated their success with Zen in the GPU market yet, but Nvidia's 30-series launch doesn't seem to be going that well. It has been very interesting and seems to be getting more interesting every day.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Almost no one buys ridiculously high-priced Nvidia cards for gaming. They are "marketing cards" as far as gaming is concerned. Nvidia seems to have crippled the 3090 for non-gaming applications, so I don't really expect them to sell many of them. The thing that convinces me that AMD has a great architecture this round is Nvidia's 8K gaming nonsense for the 3090. Makes me wonder if they are going to be competitive at 4K.

I know it makes very little actual sense, but there's no arguing with demonstrated reality. There's a dedicated section of the market who buy these things. Don't ask me why.

The 8K thing is just some sort of silly marketing attempt to differentiate the 3090 from the 3080. The huge 3090 die will sell very well indeed for deep learning workstations etc., even if it has to be as a Pro card, so NV are happily covered.

After Threadripper's domination of the HEDT space for CPUs, you may have to get used to AMD being the higher-performance and higher-priced option. We don't know if they have replicated their success with Zen in the GPU market yet, but Nvidia's 30-series launch doesn't seem to be going that well. It has been very interesting and seems to be getting more interesting every day.

This is where people start getting a bit ahead of themselves. Zen resulted from a huge, multi-year investment while Intel ran into endless process trouble. Here, they've had one year's work since the 5700/XT, only mild process improvements, and they've had to jam ray tracing into the cards while they're at it - as Turing showed, that isn't free.

Versus a half-decent die shrink and a new architecture, you would, a priori, have expected them to drop back from last year's position.

Perhaps they've done well enough to hold the line, perhaps a bit better, perhaps slightly worse. Honestly that isn't critical here.

What's critical is that they do a fast, organised refresh of their entire GPU stack - ideally with competitive mobile GPUs into the bargain. That'll show that they've got, or are putting in, the resources to take it fully seriously. Frankly it's been far too long since they've been able to do this.

If they're doing that then, yes, we'll have a rather competitive market again.
 
  • Like
Reactions: Cableman

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
About Navi 23 die.

If Navi 22 has a 40 CU/192-bit bus configuration and is going to be used for X700 SKUs, there is no way in hell Navi 23, a 32 CU/128-bit bus GPU, will be used for anything other than X600 SKUs.

So either Navi 24 is the die that will sit below it, or... AMD will refresh Navi 10 dies as 6500 SKUs.
 

Shivansps

Diamond Member
Sep 11, 2013
3,875
1,530
136
It's a shame, but you're wrong.
RTG needs to at least announce upcoming support, via a DirectML update, for image upscaling. DLSS doesn't need to be perfect; Nvidia just needs to convince the public (with some help from the press) that it's good enough and that they need it for magical performance improvements. Mindshare is all that's needed to keep image upscaling from becoming a "fatal" flaw of RDNA2 cards.

It's not about being magical or Nvidia convincing the public that it needs it, it's about LOGIC. Think about it: I have a 4K monitor and the extra resolution helps me work, but when I'm gaming... I can't play at 4K with an RX 570... so, to avoid changing resolution or running games fullscreen, I had to settle for 1440p as my desktop resolution and... yes, I'm playing at 1440p on an RX 570 in a borderless window; for 1080p I'm forced to go fullscreen. In an ideal world all games would have different UI and render resolutions, but this is not an ideal world.

If the RX 570 had DLSS, I would be able to use 4K desktop resolution and play at 1080p upscaled. This also helps people wanting to play at very high fps on monitors with resolutions higher than 1080p.
It is still an issue because games have to support it, but it's something.

What's important here is that since AMD is in the consoles, whatever system they implement will probably end up being the widely used option.
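
On the DirectML point from the quote above: that path is already reachable today through ONNX Runtime's DirectML execution provider. A minimal sketch, assuming a hypothetical 2x super-resolution model saved as "upscaler.onnx" (the provider name is ONNX Runtime's real DirectML backend; the model file, tensor shapes, and single-output assumption are placeholders):

```python
import numpy as np
import onnxruntime as ort

# Run a (hypothetical) 2x super-resolution model through DirectML.
# "DmlExecutionProvider" is ONNX Runtime's DirectML backend; the model
# file and shapes below are illustrative assumptions.
sess = ort.InferenceSession("upscaler.onnx",
                            providers=["DmlExecutionProvider"])
inp = sess.get_inputs()[0]
frame = np.random.rand(1, 3, 540, 960).astype(np.float32)  # fake 960x540 frame
(out,) = sess.run(None, {inp.name: frame})   # assumes a single-output model
print(out.shape)  # expect (1, 3, 1080, 1920) for a 2x model
```

The point of DirectML here is exactly the mindshare argument: it is vendor-neutral, so the same model would run on AMD, Nvidia, and Intel GPUs.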
 
Last edited:
  • Like
Reactions: Cableman

Thala

Golden Member
Nov 12, 2014
1,355
653
136
*Link to previous discussion*

I think the words "minimal quality loss" are extremely subjective here, as is "when everyone thinks it's a great feat, you have to re-evaluate".
Objectively, the IQ worsens. If some people don't care, fine, turn on DLSS. But it is still a total waste of die space. AMD has had upscaling for a long time now. Good, let those who want to run games like that do so.

You are wrong in two respects. First, DLSS can improve IQ - there are lots of examples published by reviewers. Second, none of the upscaling solutions offered by AMD comes anywhere close to DLSS (2.0) as far as IQ is concerned.
I would not even consider buying an AMD solution unless they have an upscaling technology comparable to DLSS - it is just that much of a game changer in my opinion.
 

PhoBoChai

Member
Oct 10, 2017
119
389
106
You are wrong in two respects. First, DLSS can improve IQ - there are lots of examples published by reviewers. Second, none of the upscaling solutions offered by AMD comes anywhere close to DLSS (2.0) as far as IQ is concerned.
I would not even consider buying an AMD solution unless they have an upscaling technology comparable to DLSS - it is just that much of a game changer in my opinion.

He's wrong, with evidence presented? Look at the comparison image.

DLSS 2 has artifacts; it's not a simple case of superior IQ. It's a compromise, and it's up to individuals whether they want those compromises.

The fact that DLSS 2 relies on motion vectors to function should already tell you that it can be prone to errors for objects that lack motion vectors. You only have to compare more carefully in Death Stranding to see all the artifacts it generates.

This entire "better than native" line is a terrible meme, because native is blurred by TAA. Use TSSAA 8x as in the idTech engine and there's zero chance of DLSS 2 even looking as good as native.
 

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
You are wrong in two respects. First, DLSS can improve IQ - there are lots of examples published by reviewers. Second, none of the upscaling solutions offered by AMD comes anywhere close to DLSS (2.0) as far as IQ is concerned.
I would not even consider buying an AMD solution unless they have an upscaling technology comparable to DLSS - it is just that much of a game changer in my opinion.
Yeah.

Superior image quality by lowering image quality.

Guys. DLSS objectively LOWERS image quality. How can it deliver superior image quality compared to native rendering?

I didn't want to chime in on the topic of DLSS, because I could not care less about it, but people claiming that by lowering image quality you achieve better image quality is a logical fallacy.

Only possible by Nvidia's marketing, I guess.
 

kurosaki

Senior member
Feb 7, 2019
258
250
86
Yeah.

Superior image quality by lowering image quality.

Guys. DLSS objectively LOWERS image quality. How can it deliver superior image quality compared to native rendering?

I didn't want to chime in on the topic of DLSS, because I could not care less about it, but people claiming that by lowering image quality you achieve better image quality is a logical fallacy.

Only possible by Nvidia's marketing, I guess.

Bu, bu buut. But AI! And Tensor cores!!! Nvidia told us it's making the pictures look even better than the original IQ! I've seen a YouTuber post such things!
 

dzoni2k2

Member
Sep 30, 2009
153
198
116
He's wrong, with evidence presented? Look at the comparison image.

DLSS 2 has artifacts; it's not a simple case of superior IQ. It's a compromise, and it's up to individuals whether they want those compromises.

The fact that DLSS 2 relies on motion vectors to function should already tell you that it can be prone to errors for objects that lack motion vectors. You only have to compare more carefully in Death Stranding to see all the artifacts it generates.

This entire "better than native" line is a terrible meme, because native is blurred by TAA. Use TSSAA 8x as in the idTech engine and there's zero chance of DLSS 2 even looking as good as native.

DLSS is better than native, Ampere is 4x faster in RT, and perf/W went up 1.9x. The guy in the black leather jacket said so, so it's definitely true!
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
It's not about being magical or Nvidia convincing the public that it needs it, it's about LOGIC. Think about it: I have a 4K monitor and the extra resolution helps me work, but when I'm gaming... I can't play at 4K with an RX 570... so, to avoid changing resolution or running games fullscreen, I had to settle for 1440p as my desktop resolution and... yes, I'm playing at 1440p on an RX 570 in a borderless window; for 1080p I'm forced to go fullscreen. In an ideal world all games would have different UI and render resolutions, but this is not an ideal world.

If the RX 570 had DLSS, I would be able to use 4K desktop resolution and play at 1080p upscaled. This also helps people wanting to play at very high fps on monitors with resolutions higher than 1080p.
It is still an issue because games have to support it, but it's something.

What's important here is that since AMD is in the consoles, whatever system they implement will probably end up being the widely used option.

AMD has had image upscaling for years. Why not use it in this case? They even have RIS (Radeon Image Sharpening) to clean up the upscaled image, and it objectively looks better than DLSS in some cases, without the performance hit that DLSS has. Your situation is the exact use case for upscaling.
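
For reference, RIS is built on AMD's contrast-adaptive sharpening (CAS), which is open source in FidelityFX. A rough numpy sketch of the idea (grayscale only; the constants are illustrative, not the shipping shader):

```python
import numpy as np

def cas_sharpen(img, sharpness=0.5):
    """CAS-style contrast-adaptive sharpening (grayscale float in [0, 1]).
    Constants are illustrative, not AMD's actual shader."""
    p = np.pad(img, 1, mode="edge")            # full 3x3 neighbourhoods
    c = p[1:-1, 1:-1]
    n, s = p[:-2, 1:-1], p[2:, 1:-1]           # north / south neighbours
    w, e = p[1:-1, :-2], p[1:-1, 2:]           # west / east neighbours
    lo = np.minimum.reduce([c, n, s, w, e])
    hi = np.maximum.reduce([c, n, s, w, e])
    # Adaptive amount: strong in low-contrast areas, backed off near
    # hard edges so the sharpening does not ring.
    amount = np.sqrt(np.clip(np.minimum(lo, 1.0 - hi) / np.maximum(hi, 1e-5),
                             0.0, 1.0))
    k = -amount * (0.0625 + 0.0625 * sharpness)    # negative-lobe weight
    out = (c + k * (n + s + w + e)) / (1.0 + 4.0 * k)
    return np.clip(out, 0.0, 1.0)
```

The upscale itself happens separately (the GPU scaler or a simple bilinear/Lanczos pass); CAS just restores perceived detail afterwards, which is why it is so cheap compared to DLSS.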
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
You are wrong in two respects. First, DLSS can improve IQ - there are lots of examples published by reviewers. Second, none of the upscaling solutions offered by AMD comes anywhere close to DLSS (2.0) as far as IQ is concerned.
I would not even consider buying an AMD solution unless they have an upscaling technology comparable to DLSS - it is just that much of a game changer in my opinion.

You can never improve original art. Sure, you can change it, and some people may like the new look. But that doesn't mean it is better.

If somebody 'reworked' the Mona Lisa, a minority of people would claim up and down that the new version was better. But that doesn't mean it would be.
 

Konan

Senior member
Jul 28, 2017
360
291
106
IMO

AMD will need a comprehensive solution to compete against NV in RT games. The jury is still out until the reviewers test it, which is going to happen. DX12U will be on XSX but not PS5.
Additionally, a DLSS competitor will also be needed. What AI training is there? Will there be a competing solution? Hopefully...

"I think ray tracing is an important technology, and it's something we're working on as well, both from a hardware and software standpoint," Su said.
Later, Su expanded on her thought. "I don't think we should say that we are 'waiting,'" Su said, in response to this reporter's question. "We are deep in development, and that development is concurrent between hardware and software."
"The consumer doesn't see a lot of benefit today because the other parts of the ecosystem are not ready," Su added.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
I'm not sure what you gents are complaining about. Radeon Image Sharpening is a pretty dang good tool, just like DLSS is. Anyone saying DLSS is bad is just picking a side and arguing for the sake of it.

I'm sure it would be easy to implement RIS 2.0 (whatever that may be). https://www.techspot.com/article/1873-radeon-image-sharpening-vs-nvidia-dlss/#allcomments

It's understood that purists are amongst us, but something needs to change. I just wish PhysX was usable on the GPU front for AMD, and impacted gameplay. That to me is a better use and function versus an RT core, or dedicated hardware for RT.
 
  • Like
Reactions: beginner99

Geranium

Member
Apr 22, 2020
83
101
61
You can never improve original art. Sure, you can change it, and some people may like the new look. But that doesn't mean it is better.

If somebody 'reworked' the Mona Lisa, a minority of people would claim up and down that the new version was better. But that doesn't mean it would be.
When you compare something with TAA, that something is always better than TAA. Here that something is DLSS.
If I am not wrong, the games that support DLSS only have TAA or a poor implementation of some other AA. In that way DLSS will automatically look better than native resolution.
 

nurturedhate

Golden Member
Aug 27, 2011
1,767
773
136
If somebody 'reworked' the Mono Lisa, there would be a minority of people that would claim up and down that the new version was better. But that doesn't mean it would be.

No, it's more like someone took a picture of the Mona Lisa with an old camera phone at low res, used a Snapchat filter to add cat ears and glitter, printed it out, hung it next to the original, and then started screaming at everyone who didn't believe it was better than the original...
 
Last edited:

stebler

Member
Sep 10, 2020
25
75
61
Somebody asked and you answered this already on Reddit, but I'd like to second the request to read this data from earlier macOS versions to compare how much has changed, if at all.

Update:

I checked on macOS 10.15.1, which was released in October 2019.

It contains discovery firmware for Navi 14, Navi 12, Navi 21, Navi 21 Lite and Van Gogh.

For Navi 14 and Navi 12, they are identical to the ones from macOS 11 beta 6.

Navi 21 and Van Gogh use an older format (the same as the Navi 1x family) for the gc_info structure, but otherwise the values are the same.

Navi 21 Lite (Xbox Series X) has 56 CUs and 20 TCCs, as expected. I will add it to the table in the Reddit post.

I also want to reiterate that the powerplay tables are identical between macOS and Linux.
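
For anyone wanting to poke at these blobs themselves, the parsing is just fixed-offset struct reads. A sketch in Python - the field names mirror the gc_info structure in the Linux amdgpu discovery headers, but the offset, field subset, and file name here are assumptions for illustration:

```python
import struct

# Field names follow Linux's amdgpu gc_info; the format string, offset,
# and file name are illustrative assumptions, not the real layout.
GC_FIELDS = ("gc_num_se", "gc_num_wgp0_per_sa", "gc_num_wgp1_per_sa",
             "gc_num_rb_per_se", "gc_num_gl2c", "gc_num_gprs",
             "gc_num_max_gs_thds", "gc_gs_table_depth")
GC_FMT = "<" + "I" * len(GC_FIELDS)  # little-endian u32 fields

def read_gc_info(blob: bytes, offset: int) -> dict:
    return dict(zip(GC_FIELDS, struct.unpack_from(GC_FMT, blob, offset)))

with open("discovery.bin", "rb") as f:   # assumed firmware dump name
    info = read_gc_info(f.read(), 0x0)   # offset is a placeholder
# On RDNA, CU count ~ num_se * SAs per SE * (wgp0 + wgp1) * 2 CUs per WGP.
print(info)
```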
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
I'm not sure what you gents are complaining about. Radeon Image Sharpening is a pretty dang good tool, just like DLSS is. Anyone saying DLSS is bad is just picking a side and arguing for the sake of it.

I'm sure it would be easy to implement RIS 2.0 (whatever that may be). https://www.techspot.com/article/1873-radeon-image-sharpening-vs-nvidia-dlss/#allcomments

For some values of 'easy', that is. If you look at what DLSS is doing, there's a huge amount going on.

Most of the underlying deep learning research is public domain, of course, but it's probably not practically trivial to apply it right. It took NV a fair while to get it right, and they're doing lots of DL these days.

I'm not sure if AMD could easily replicate the back-end training work themselves. Microsoft definitely could have, you'd think.

The other potential barrier is that it simply doesn't make sense from a performance point of view to use DLSS unless you have tensor cores in your GPU architecture. There's a non-trivial cost to including those, of course.

NV have them in to sell their cards for various deep learning situations, so in some ways DLSS is them compensating for this. No point in a console, of course, and that's where RDNA2 is coming from.
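
To put a rough shape on "the back-end training work": at its core it is supervised super-resolution - take high-res ground-truth frames, synthesize low-res inputs, and regress. A toy PyTorch sketch of that loop (generic SR under stated assumptions, not Nvidia's pipeline - DLSS 2 additionally feeds motion vectors and temporal history, and trains across many games):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy supervised super-resolution training loop: downscale high-res
# frames to make inputs, regress the 2x output against the original.
class TinyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale taps
            nn.PixelShuffle(2),                  # rearrange to 2x size
        )

    def forward(self, x):
        return self.body(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                    # stand-in for a real dataset loop
    hi = torch.rand(8, 3, 64, 64)       # pretend high-res frames
    lo = F.interpolate(hi, scale_factor=0.5, mode="bilinear")
    loss = F.l1_loss(model(lo), hi)
    opt.zero_grad(); loss.backward(); opt.step()
```

The hard part isn't this loop; it's the data pipeline (per-game captures at 16K-class reference quality, by some accounts) and making one network generalize, which is where the "fair while to get it right" comes in.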

It's understood that purists are amongst us, but something needs to change. I just wish PhysX was usable on the GPU front for AMD, and impacted gameplay. That to me is a better use and function versus an RT core, or dedicated hardware for RT.

The RT stuff could get somewhat annoying in the next few years. The consoles' RT performance is going to set the baseline level of effects most games see, and you have to suspect those will probably be mostly superficial.

Oh well :)
 

ModEl4

Member
Oct 14, 2019
71
33
61
I think anyone saying that an AI-enhanced lower-res image can't be perceivably as good as a higher-resolution image is fighting the inevitable. AI-assisted image improvement is the way forward, and Nvidia's is just the first tiny step in a marathon. In less than four years, we will probably have the first examples of a 4K AI-assisted image having unquestionably better perceivable image quality than an 8K version in certain crucial aspects, for the majority of people comparing them.

Regarding DLSS: maybe it is not perfect, but it is certainly years ahead of what the competition is offering with Navi1. If you check an implementation like the one in Control, you can see that QHD DLSS (the 4K quality mode) can have certain advantages over native 4K; check the Digital Foundry video. For me, the clarity and the preservation of detail in faces, for example, heavily outweigh the faint ringing artifact. (Of course it is a matter of taste, and it depends on the TAA implementation etc.)

Read the article at Coreteks; although it is a little anti-Nvidia, with a "Raja will save us all" slant, it is a very interesting read.

EDIT : spelling corrections
 
Last edited:

DJinPrime

Member
Sep 9, 2020
87
89
51
No, it's more like someone took a picture of the Mona Lisa with an old camera phone at low res, used a Snapchat filter to add cat ears and glitter, printed it out, hung it next to the original, and then started screaming at everyone who didn't believe it was better than the original...
Your analogy is so far off base though; DLSS is nowhere near that bad, in either the quality change or what the image shows. And Stuka87, taking your analogy of the Mona Lisa: the reason people are excited about DLSS is that it lets them enjoy higher quality settings and framerates (the Mona Lisa) without having the hardware (the money; there's only one Mona Lisa), with a minor image quality trade-off (buying a print of the Mona Lisa and hanging it in your home). Everyone knows it's a print, but it still looks really nice and can be enjoyed at home. And I wonder what monitors you guys run, because if you don't have an HDR1000-capable display, you have no idea what visual qualities you're missing out on.
 
  • Haha
Reactions: pcp7

insertcarehere

Senior member
Jan 17, 2013
639
607
136
No, it's more like someone took a picture of the Mona Lisa with an old camera phone at low res, used a Snapchat filter to add cat ears and glitter, printed it out, hung it next to the original, and then started screaming at everyone who didn't believe it was better than the original...

I take it you believe the only worthwhile AA is SSAA? Because just about every other AA method (esp. post-processing AA) also "reworks" the image in a somewhat similar fashion, one that arguably does not represent the native render "perfectly".
 

nurturedhate

Golden Member
Aug 27, 2011
1,767
773
136
Your analogy is so far off base though; DLSS is nowhere near that bad, in either the quality change or what the image shows. And Stuka87, taking your analogy of the Mona Lisa: the reason people are excited about DLSS is that it lets them enjoy higher quality settings and framerates (the Mona Lisa) without having the hardware (the money; there's only one Mona Lisa), with a minor image quality trade-off (buying a print of the Mona Lisa and hanging it in your home). Everyone knows it's a print, but it still looks really nice and can be enjoyed at home. And I wonder what monitors you guys run, because if you don't have an HDR1000-capable display, you have no idea what visual qualities you're missing out on.
I take it you believe the only worthwhile AA is SSAA? Because just about every other AA method (esp. post-processing AA) also "reworks" the image in a somewhat similar fashion, one that arguably does not represent the native render "perfectly".
My issue is with the presentation, not the tool. As a tool, an option, it's perfectly viable. But people saying it produces something more accurate than the source...
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
My issue is with the presentation, not the tool. As a tool, an option, it's perfectly viable. But people saying it produces something more accurate than the source...

Well, pedantically it can, because the source images they train on are very high resolution.

Obviously, most often they mean better than standard anti-aliasing.