Question Speculation: RDNA2 + CDNA Architectures thread

uzzi38

Platinum Member
Oct 16, 2019
All die sizes are within 5mm^2. The poster here has been right about some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Still, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

scineram

Senior member
Nov 1, 2020
As I wrote earlier, "My favorite leaker is David Wang, but he is mostly ignored by everyone". In his presentation from March 2020 we can see this slide. When the rumors about some weird 128MB Infinity Cache started, most people had completely forgotten about this "green nonsense". Can we logically connect this green stuff with the early Infinity Cache rumors? Judge for yourself.


[Attachment 32737: slide from David Wang's March 2020 presentation]

No.
 

Mopetar

Diamond Member
Jan 31, 2011
It didn't for Renoir. It clocks as high as Matisse, and the Vega cores clock far higher than Radeon VII's.

Vega in the APUs almost seems like a pretty different design that's reused the same name. It's not much different from RDNA2 still using the Navi name despite having some radically different design aspects from RDNA1.

APUs suffer the same density issue, where the average density is useless given that the different parts of the SoC will themselves have drastically different densities.

Hell, look at NVidia where the A100 is 1.4 times more dense than consumer Ampere. That's not just down to using TSMC vs. Samsung, but because the clock speeds would be lower allowing for a more dense design. You can pack the transistors more tightly when you know that each won't be generating as much heat.
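For a quick sanity check on that 1.4x figure, here's the back-of-the-envelope arithmetic using NVIDIA's public transistor counts and die sizes (a rough sketch; the figures are the commonly published ones):

```python
# Rough density comparison from public transistor counts and die sizes.
dies = {
    "GA100 (TSMC N7)":    (54.2e9, 826.0),   # (transistors, die area in mm^2)
    "GA102 (Samsung 8N)": (28.3e9, 628.4),
}
density = {name: tr / area / 1e6 for name, (tr, area) in dies.items()}
for name, d in density.items():
    print(f"{name}: {d:.1f} MTr/mm^2")
print(f"ratio: {density['GA100 (TSMC N7)'] / density['GA102 (Samsung 8N)']:.2f}x")
# -> ~65.6 vs ~45.0 MTr/mm^2, i.e. roughly a 1.46x density advantage
```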
 

beginner99

Diamond Member
Jun 2, 2009
Hell, look at NVidia where the A100 is 1.4 times more dense than consumer Ampere. That's not just down to using TSMC vs. Samsung, but because the clock speeds would be lower allowing for a more dense design. You can pack the transistors more tightly when you know that each won't be generating as much heat.

GA100 has way more cache than GA102 and cache usually is very dense.
 

Stuka87

Diamond Member
Dec 10, 2010
Vega in the APUs almost seems like a pretty different design that's reused the same name. It's not much different from RDNA2 still using the Navi name despite having some radically different design aspects from RDNA1.

APUs suffer the same density issue, where the average density is useless given that the different parts of the SoC will themselves have drastically different densities.

Hell, look at NVidia where the A100 is 1.4 times more dense than consumer Ampere. That's not just down to using TSMC vs. Samsung, but because the clock speeds would be lower allowing for a more dense design. You can pack the transistors more tightly when you know that each won't be generating as much heat.

The difference between desktop Vega and APU Vega is in the memory interfaces. The ISA between the two Vegas is the same as far as I know. And of course, once you clock Vega down it becomes super efficient, unlike the desktop part that was pushed to the max from the get-go.
 

Shivansps

Diamond Member
Sep 11, 2013
The difference between desktop Vega and APU Vega is in the memory interfaces. The ISA between the two Vegas is the same as far as I know. And of course, once you clock Vega down it becomes super efficient, unlike the desktop part that was pushed to the max from the get-go.

That may be true for Raven Ridge and Picasso, although the frequencies used there are similar to a Vega 56's. Renoir seems to have some internal changes; the ROP count seems to be half of Picasso's, to start with.

Desktop Vega ran at way too high a vcore to increase yields.
 

maddie

Diamond Member
Jul 18, 2010
The difference between desktop Vega and APU Vega is in the memory interfaces. The ISA between the two Vegas is the same as far as I know. And of course, once you clock Vega down it becomes super efficient, unlike the desktop part that was pushed to the max from the get-go.
The ISA doesn't force all designs to be exactly alike at the physical circuitry level.
 

kurosaki

Senior member
Feb 7, 2019
12GB of VRAM used at 4K, feels bad for the RTX 3080 buyers (all 3 of them!)
This is quite a crazy generation, to be honest. Either we are stuck with memory that's too slow, or it's a custom-made, expensive solution. Or just an expensive solution.
Both teams have done quite some work and/or made compromises on memory this time around, and I get the feeling AMD came out alive this time. :D
 

PhoBoChai

Member
Oct 10, 2017
12GB of VRAM used at 4K, feels bad for the RTX 3080 buyers (all 3 of them!)

Feels bad that NVIDIA released such a powerful 4K GPU and gave it only 10GB. Anyone who defends this anti-consumer decision from NV is beyond reason.

At the very least, they should have given it 12GB on a 384-bit bus so it at least has the legs to power through the games coming next year and onwards. But no, they had to squeeze out that bit extra for higher margins.

I really do hope gamers punish Jensen for this decision this gen, so that next gen he won't pull this kind of stunt again.

Think about it: the 2016 flagship gaming GPU had 11GB. The 2018 flagship gaming GPU: 11GB. The late-2020 flagship gaming GPU: 10GB (and no, the 3090 is a ridiculous price hike). Meanwhile, we are on the verge of one of the biggest leaps in baseline gaming spec, to 16GB in the new consoles.
 

SPBHM

Diamond Member
Sep 12, 2012
I do think that the VRAM being 16GB even on the 6800 is a strong point for AMD. If you think about it for a moment, 8GB has been more or less mainstream for a long time, so going for double that makes sense; the consoles also have around 16GB.

Given that the consoles have very fast SSDs and plenty of RAM, I can see games really pushing on visuals, and a large pool of VRAM might be very important.
 

Qwertilot

Golden Member
Nov 28, 2013
Yes and no. The 6800 series has quite low raw bandwidth - beneath the consoles - and much more raw compute to keep fed, so the cards definitely need the cache to keep working quite well.

If we're positing a massive increase in VRAM usage, the cache on the 6800 series will inevitably lose at least some of its effectiveness. Doubly so if people start making games with very fast transitions between totally different worlds.

The likely extent of any problem is probably testable in principle; whether anyone will, I don't know. A lot of speculative effort. We'll see in a few years. Neither set of cards is automatically future-proof, though.
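To put that effectiveness worry in numbers, here's a toy model that serves cache hits at an assumed ~2TB/s and misses at the 512 GB/s the 256-bit 16Gbps bus provides (256/8 x 16 = 512). The cache bandwidth and the hit rates below are illustrative assumptions, not AMD's figures:

```python
# Toy model: average bandwidth when a fraction `h` of accesses hit the
# Infinity Cache and the rest fall through to GDDR6. Time-weighted
# (harmonic) mix: each byte costs h/CACHE_BW + (1-h)/DRAM_BW seconds.
CACHE_BW = 2000.0   # GB/s -- assumed cache bandwidth, illustrative only
DRAM_BW = 512.0     # GB/s -- 256-bit bus * 16 Gbps per pin / 8 bits per byte

def effective_bw(h: float) -> float:
    return 1.0 / (h / CACHE_BW + (1.0 - h) / DRAM_BW)

for h in (0.8, 0.6, 0.4, 0.2):
    print(f"hit rate {h:.0%}: ~{effective_bw(h):.0f} GB/s effective")
# Dropping from an 80% to a 40% hit rate (~1265 -> ~729 GB/s) costs over
# 40% of the effective bandwidth in this model -- the 'bigger working
# sets hurt the cache' concern above.
```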
 

PhoBoChai

Member
Oct 10, 2017
I do think that the VRAM being 16GB even on the 6800 is a strong point for AMD. If you think about it for a moment, 8GB has been more or less mainstream for a long time, so going for double that makes sense; the consoles also have around 16GB.

Given that the consoles have very fast SSDs and plenty of RAM, I can see games really pushing on visuals, and a large pool of VRAM might be very important.

Well, back in 2016, mid-range GPUs like the 1070 had 8GB. Even the mainstream RX 480 had 8GB. Heck, even NV's mainstream offering, the 1060, had 6GB, not far off.

Years later, the 3070 is 8GB. What a joke, really, and NV is asking gamers to sacrifice GPU longevity (keeping max settings playable for longer) just so they can boost their margins. This is after cheaping out on TSMC and going with budget Samsung. Really?!
 

sze5003

Lifer
Aug 18, 2012
But I thought everyone was saying you don't need more than 10GB of VRAM for most games. I've been reading that in several posts all over. First it was "oh, it's just MS Flight Simulator 2020, only one title". That doesn't look to be the case for future games.
 

KompuKare

Golden Member
Jul 28, 2009
Well, back in 2016, mid-range GPUs like the 1070 had 8GB. Even the mainstream RX 480 had 8GB. Heck, even NV's mainstream offering, the 1060, had 6GB, not far off.

Years later, the 3070 is 8GB. What a joke, really, and NV is asking gamers to sacrifice GPU longevity (keeping max settings playable for longer) just so they can boost their margins. This is after cheaping out on TSMC and going with budget Samsung. Really?!
Well, since they have around 80% of the market, maybe they are more into built-in obsolescence, and being stingy with VRAM is one way to do that.
Obviously a lower BOM also helps margins, especially as Nvidia have once again gone for a low-volume GDDR6X variant rather than generic GDDR6.
Those 1.5GB GTX 580s didn't age that well. Neither did the 2GB 680 versus the 3GB 7970 (although I don't recall whether the 4GB versions of the 680 aged much better).
 

maddie

Diamond Member
Jul 18, 2010
Well, back in 2016, mid-range GPUs like the 1070 had 8GB. Even the mainstream RX 480 had 8GB. Heck, even NV's mainstream offering, the 1060, had 6GB, not far off.

Years later, the 3070 is 8GB. What a joke, really, and NV is asking gamers to sacrifice GPU longevity (keeping max settings playable for longer) just so they can boost their margins. This is after cheaping out on TSMC and going with budget Samsung. Really?!
It looks like AMD is taunting Nvidia already.

"At 4K resolution using UltraHD textures, Godfall requires tremendous memory bandwidth to run smoothly. In this intricately detailed scene, we're using 4K by 4K texture sizes and 12 GB of graphics memory to play at 4K resolution.

The Infinity Cache on AMD's Radeon RX 6000 Series graphics cards runs Godfall at high frame rates with maximum settings enabled.
"
 

moinmoin

Diamond Member
Jun 1, 2017
HWU also pointed out here that 8GB of VRAM is not enough for 1440p Ultra textures in Watch Dogs:


6GB isn't enough for 1440p High either.
Ouch.

Yeah, it's all on the low side considering the consoles have 16GB for 4K, and 10GB for 1440p in the XSS's case (of course, one has to subtract the memory used by the OS, but memory usage on consoles can and will also be optimized in a way it isn't on PC).
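On the OS-reserve point, the publicly reported budgets look roughly like this (a sketch; the reserve figures are as reported for the Series consoles, and games still have to fit their CPU-side data in the same pool):

```python
# Approximate memory available to games after the OS reserve, per the
# publicly reported Series X/S figures (treat as approximate).
consoles = {
    # name: (total GB, reported OS reserve GB)
    "Xbox Series X": (16.0, 2.5),   # 10 GB of the remainder is 'GPU-optimal'
    "Xbox Series S": (10.0, 2.0),
}
for name, (total, reserve) in consoles.items():
    print(f"{name}: ~{total - reserve:.1f} GB available to games")
# So the graphics share ends up closer to a 10-12 GB dGPU than to 16 GB.
```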
 

Mopetar

Diamond Member
Jan 31, 2011
GA100 has way more cache than GA102 and cache usually is very dense.

I guess I wasn't aware of that, but it does go back to my own point anyway.

I suppose we could try to look at the amounts of cache to see if it ends up making a difference once it's accounted for, but any remaining difference could just be down to Samsung vs. TSMC, so I'm not sure it's worth the trouble.
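For what it's worth, a hedged sketch of that accounting: estimate the L2 area from the known sizes (40MB on GA100, 6MB on GA102) with assumed bit-cell sizes, subtract it, and re-divide. The bit-cell figures and the 2x macro-overhead factor are rough assumptions, not measured values, and register files and other SRAM are ignored:

```python
# Sketch: logic density after subtracting an estimated L2 area.
MIB_BITS = 8 * 2**20                 # bits per MiB

def sram_area_mm2(mib, bitcell_um2, overhead=2.0):
    # bit count * cell area * overhead factor, converted from um^2 to mm^2
    return mib * MIB_BITS * bitcell_um2 * overhead / 1e6

dies = {
    # name: (transistors, die mm^2, L2 MiB, assumed SRAM bit cell um^2)
    "GA100": (54.2e9, 826.0, 40.0, 0.027),   # TSMC N7 HD cell (assumed)
    "GA102": (28.3e9, 628.4,  6.0, 0.032),   # Samsung 8N cell (assumed)
}
for name, (tr, area, l2_mib, cell) in dies.items():
    l2_area = sram_area_mm2(l2_mib, cell)
    logic_tr = tr - l2_mib * MIB_BITS * 6    # drop ~6 transistors per bit cell
    print(f"{name}: L2 ~{l2_area:.0f} mm^2, "
          f"non-L2 density ~{logic_tr / (area - l2_area) / 1e6:.1f} MTr/mm^2")
# Even with generous assumptions GA100's L2 is ~18 mm^2 of an 826 mm^2
# die, so the cache alone doesn't close the ~1.4x density gap.
```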