Question Speculation: RDNA2 + CDNA Architectures thread

As I wrote earlier, "My favorite leaker is David Wang, but he is mostly ignored by everyone." In his presentation from March 2020, we can see this slide. When the rumors about some weird 128MB Infinity Cache started, most people had completely forgotten about this "green nonsense". Can we logically connect this green stuff with the early Infinity Cache rumors? Hm, judge for yourself.


[Attachment: slide from the March 2020 presentation]

No.
 
It didn't for Renoir. It clocks as high as Matisse, and the Vega cores clock far higher than the Radeon VII.

Vega in the APUs almost seems like a fairly different design that reuses the same name. That's not too different from RDNA2 still using the Navi name despite having some radically different design aspects compared to RDNA1.

APUs suffer the same density issue, where the average density is useless given that the different parts of the SoC will themselves have drastically different densities.

Hell, look at NVIDIA, where the A100 is 1.4 times denser than consumer Ampere. That's not just down to TSMC vs. Samsung, but because the clock speeds would be lower, allowing for a denser design. You can pack the transistors more tightly when you know each one won't be generating as much heat.
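That ~1.4x figure is easy to sanity-check against the commonly cited public die specs (roughly 54.2B transistors on ~826 mm² for GA100 on TSMC N7, and ~28.3B on ~628 mm² for consumer GA102 on Samsung 8N — approximate published numbers, not official density claims):

```python
# Rough density sanity check using commonly cited (approximate) die figures:
# transistor counts in billions, die areas in mm^2.
dies = {
    "GA100 (TSMC N7)": (54.2, 826.0),
    "GA102 (Samsung 8N)": (28.3, 628.4),
}

# Density in millions of transistors per mm^2.
density = {name: xtors * 1000 / area for name, (xtors, area) in dies.items()}
for name, d in density.items():
    print(f"{name}: {d:.1f} MTr/mm^2")

ratio = density["GA100 (TSMC N7)"] / density["GA102 (Samsung 8N)"]
print(f"GA100 is {ratio:.2f}x as dense as GA102")
```

Which comes out to roughly 66 vs. 45 MTr/mm², about 1.46x — in line with the "1.4 times" claim above.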
 
GA100 has way more cache than GA102 and cache usually is very dense.
 
The difference between desktop Vega and APU Vega is in the memory interfaces. The ISA between the two Vegas is the same, as far as I know. And of course, once you clock Vega down it becomes super efficient, unlike the desktop part, which was pushed to the max from the get-go.
 
That may be true for Raven and Picasso, although the frequencies used there are similar to a Vega 56. Renoir seems to have some internal changes; the ROP count seems to be half that of Picasso, to start with.

Desktop Vega ran at way too high a vcore to increase yields.
 
ISA doesn't force all designs to be exactly similar at the physical circuitry level.
 
12GB of VRAM used at 4K, feels bad for the RTX 3080 buyers (all 3 of them!)
This is quite a crazy generation, to be honest: either we're stuck with memory that's too slow, or it's a custom-made, expensive solution. Or just an expensive solution.
Both teams have made quite a lot of work and/or compromises on memory this time around, and I have a feeling AMD got out alive this time. 😀
 
Feels bad that NVIDIA released such a powerful 4K GPU and gave it only 10GB. Anyone who defends this anti-consumer decision from NV is beyond reason.

At the very least, they should have given it 12GB on a 384-bit bus so it at least has the legs to power through games coming next year and onwards. But no, gotta squeeze out that bit extra for higher margins.

I really do hope gamers punish Jensen for this decision this gen so that next gen he won't pull this kind of stunt again.

Think about it: 2016 flagship gaming GPU: 11GB. 2018 flagship gaming GPU: 11GB. Late 2020 flagship gaming GPU: 10GB (no, the 3090 doesn't count; it's a ridiculous price hike). Meanwhile we are on the verge of one of the biggest leaps in baseline gaming spec, to 16GB in the new consoles.
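For what it's worth, the bandwidth side of the "12GB, 384-bit bus" suggestion is simple arithmetic: peak GDDR bandwidth is bus width times per-pin data rate, so at the 3080's 19 Gbps GDDR6X, a 384-bit bus would have delivered 912 GB/s instead of 760 GB/s:

```python
# Peak GDDR memory bandwidth: bus width (bits) x per-pin data rate (Gbps),
# divided by 8 to convert bits to bytes.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_bits * gbps_per_pin / 8

# RTX 3080 as shipped: 320-bit bus, 19 Gbps GDDR6X.
print(bandwidth_gbs(320, 19))  # 760.0 GB/s
# Hypothetical 384-bit / 12GB configuration at the same data rate.
print(bandwidth_gbs(384, 19))  # 912.0 GB/s
```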
 
I do think that the VRAM being 16GB even on the 6800 is a strong point for AMD. If you consider that 8GB has been more or less mainstream for a long time, going for double that makes sense; the consoles also have around 16GB.

Given that the consoles have very fast SSDs and plenty of RAM, I can see games really pushing on visuals, and a large pool of VRAM might be very important.
 
Yes and no. The 6800 series has quite low raw bandwidth - beneath the consoles - and much more raw compute to keep fed, so it definitely needs the cache to keep working quite well.

If we're positing a massive increase in VRAM usage, the cache on the 6800 series will inevitably lose at least some of its effectiveness. Doubly so if people start making games with very fast transitions between totally different worlds.

The likely extent of any problem is probably testable in principle; whether anyone will, I dunno. A lot of speculative effort. We'll see in a few years. Neither set of cards is automatically future-proof, though.
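The hit-rate argument can be sketched with a toy model: treat effective bandwidth as a hit-rate-weighted blend of cache and VRAM bandwidth. The VRAM figure below matches a 256-bit GDDR6 bus at 16 Gbps; the cache bandwidth and the hit rates are made-up illustrative assumptions, not AMD's numbers:

```python
# Toy model of a bandwidth-amplifying last-level cache (Infinity Cache style):
# effective_bw = hit_rate * cache_bw + (1 - hit_rate) * vram_bw
def effective_bw(hit_rate: float, cache_bw: float, vram_bw: float) -> float:
    """Blended effective bandwidth in GB/s for a given cache hit rate."""
    return hit_rate * cache_bw + (1 - hit_rate) * vram_bw

CACHE_BW = 1900.0  # GB/s, assumed on-die cache bandwidth (illustrative)
VRAM_BW = 512.0    # GB/s, 256-bit GDDR6 at 16 Gbps

# As a bigger working set pushes the hit rate down, the "amplification" fades.
for hit in (0.7, 0.5, 0.3):
    print(f"hit rate {hit:.0%}: ~{effective_bw(hit, CACHE_BW, VRAM_BW):.0f} GB/s effective")
```

The shape of the curve is the point: every drop in hit rate moves the card back toward its raw 512 GB/s, which is exactly the "loses effectiveness" concern above.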
 
Well, back in 2016, mid-range GPUs like the 1070 had 8GB. Even the mainstream RX 480 had 8GB. Heck, even NV's mainstream offering, the 1060, had 6GB - not far off.

Years later, the 3070 is 8GB. What a joke, really, and NV is asking gamers to sacrifice GPU longevity (keeping max settings playable for longer) just so they can boost their margins. This after cheaping out on TSMC and going with budget Samsung. Really?!
 
But I thought everyone was saying you don't need more than 10GB of VRAM for most games; I've been reading that in several posts all over. First it was "oh, it's just MS Flight Sim 2020, only one title." That doesn't look to be the case for future games.
 
Well, since they have around 80% of the market, maybe they are more into built-in obsolescence, and being stingy with VRAM is one way to do that.
Obviously a lower BOM also helps margins, especially as Nvidia has once again gone for a low-volume X variant rather than generic GDDR.
Those 1.5GB GTX 580s didn't age that well. Neither did the 2GB 680 versus the 3GB 7970 (although I don't recall if the 4GB versions of the 680 aged much better).
 
It looks like AMD is taunting Nvidia already:

"At 4K resolution using UltraHD textures, Godfall requires tremendous memory bandwidth to run smoothly. In this intricately detailed scene, we're using 4K by 4K texture sizes and 12 GB of graphics memory to play at 4K resolution.

The Infinity Cache on AMD's Radeon RX 6000 Series graphics cards runs Godfall at high frame rates with maximum settings enabled."
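To put "4K by 4K texture sizes" in perspective, some illustrative arithmetic (the RGBA8 and BC7 formats here are assumptions for the example, not anything Godfall has confirmed): a single uncompressed 4096x4096 RGBA8 texture is 64 MB before mipmaps.

```python
# Memory footprint of one 4096x4096 texture (illustrative arithmetic).
SIZE = 4096
uncompressed = SIZE * SIZE * 4   # RGBA8: 4 bytes per texel
bc7 = uncompressed // 4          # BC7 block compression: 1 byte per texel
with_mips = uncompressed * 4 // 3  # full mip chain adds roughly a third

print(f"RGBA8:        {uncompressed / 2**20:.0f} MB")
print(f"BC7:          {bc7 / 2**20:.0f} MB")
print(f"RGBA8 + mips: {with_mips / 2**20:.1f} MB")
```

Even with compression, tens of megabytes per unique texture adds up fast once you count render targets and geometry on top, so a 12 GB figure at 4K stops looking outlandish.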
 
HWU also pointed out here that 8GB VRAM is not enough for 1440p Ultra textures in Watch Dogs.

6GB isn't enough for 1440p High either.
Ouch.

Yeah, it's all on the low side considering the consoles have 16GB for 4K, or 10GB for 1440p in the XSS's case (of course one has to subtract the memory used by the OS, but memory usage on consoles can and will also be optimized in a way it isn't on PC).
 
GA100 has way more cache than GA102 and cache usually is very dense.

I guess I wasn't aware of that, but it does go back to my own point anyway.

I suppose we could try to look at the amounts of cache to see if it ends up making a difference once it's accounted for, but any remaining difference could just be down to Samsung vs TSMC so I'm not sure it's worth the trouble.
 