Discussion Ada/'Lovelace'? Next gen Nvidia gaming architecture speculation


jpiniero

Lifer
Oct 1, 2010
14,591
5,214
136
Yeah, it's really strange that the scaling isn't there, which debunks all of the Ada Lovelace Twitter leakers. FWIW, those leaks were based on TSE scores, which in theory should reflect gaming workloads, but perhaps there's a disconnect this time around. We'll just have to wait for proper reviews to know at this point. I won't discount Lovelace just yet...

I think the TSE scores were estimates based upon the specs given.
 

Revolution 11

Senior member
Jun 2, 2011
952
79
91
Hard to argue that if AMD performance is basically identical or better in some cases. If AMD "sucks" but still comes out on top what does that say about NVidia? There aren't too many people who can handle that much cognitive dissonance.

At the end of the day it's their money though and if them ignoring a brand means more supply or cheaper cards for me, I'm not going to argue, especially after the last few years.
I will strongly disagree. There are a lot of people who can handle a lot of cognitive dissonance by creating alternative facts and selectively accepting information that preserves their preexisting beliefs. Just look at religion, politics, etc, etc. It is all over the place and it always has been.

As you said, it is a (short-term) positive for those who do not fall for the marketing traps. In the long-term, it is still better for consumers to be more strictly focused on tangible numbers like price/performance/power and not "mindshare" so that companies produce better products at cheaper prices.
 

Furious_Styles

Senior member
Jan 17, 2019
492
228
116
I will strongly disagree. There are a lot of people who can handle a lot of cognitive dissonance by creating alternative facts and selectively accepting information that preserves their preexisting beliefs. Just look at religion, politics, etc, etc. It is all over the place and it always has been.

As you said, it is a (short-term) positive for those who do not fall for the marketing traps. In the long-term, it is still better for consumers to be more strictly focused on tangible numbers like price/performance/power and not "mindshare" so that companies produce better products at cheaper prices.
Yeah, there are always the fanboys. They're useless, though, and not worth listening to. See the Intel or AMD CPU threads; they always crawl out of the woodwork to cheer on their team.
 

Tup3x

Senior member
Dec 31, 2016
963
948
136
Hard to argue that if AMD performance is basically identical or better in some cases. If AMD "sucks" but still comes out on top what does that say about NVidia? There aren't too many people who can handle that much cognitive dissonance.

At the end of the day it's their money though and if them ignoring a brand means more supply or cheaper cards for me, I'm not going to argue, especially after the last few years.
As long as I can't force anisotropic filtering in some crappy DX11/10 games that I have, that brand is a non-option for me, unfortunately. Nothing brand related; I just do not like looking at blurry textures.
 
  • Like
Reactions: igor_kavinski

jpiniero

Lifer
Oct 1, 2010
14,591
5,214
136
What I really don't like is that all the major performance enhancements seem to be in their proprietary technology. When you look at the "Today's games" uplift, it is only 50-70% compared to the major uplift with DLSS3.

Yeah, but even with that, as I mentioned the 4090 should be close to 2x even in raster games without DLSS3.
 

blckgrffn

Diamond Member
May 1, 2003
9,126
3,065
136
www.teamjuchems.com
What I really don't like is that all the major performance enhancements seem to be in their proprietary technology. When you look at the "Today's games" uplift, it is only 50-70% compared to the major uplift with DLSS3.

All that die space seems to be spent on solutions looking for problems, imo. More cache would have likely helped everything, all the time. I know there are diminishing returns, but it really looks like more bandwidth would help scaling of the existing hardware, and cache hits effectively create more bandwidth.
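
To put a rough number on the "cache hits create bandwidth" point, here is a minimal back-of-the-envelope sketch. This is my own toy model, not anything published by Nvidia or AMD; real GPUs overlap traffic across several cache levels, so treat it as ballpark only.

```python
# Toy model: effective bandwidth amplification from an on-die cache.
# Assumes every request is either served by the cache (hit) or by DRAM (miss).

def effective_bandwidth(dram_bw_gbps: float, hit_rate: float) -> float:
    """Bandwidth the shader cores effectively see, given DRAM bandwidth and cache hit rate."""
    miss_rate = 1.0 - hit_rate
    return dram_bw_gbps / miss_rate  # only misses consume DRAM bandwidth

# 1008 GB/s is roughly the published GDDR6X bandwidth of a 3090 Ti/4090 class card;
# the hit rates are purely illustrative.
for hit_rate in (0.0, 0.3, 0.5, 0.7):
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth(1008, hit_rate):.0f} GB/s effective")
```

Even a 50% hit rate roughly doubles the bandwidth the SMs see, which is why a big on-die cache can stand in for a wider memory bus.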
 
  • Like
Reactions: Tlh97 and Saylick

Revolution 11

Senior member
Jun 2, 2011
952
79
91
All that die space seems to be spent on solutions looking for problems, imo. More cache would have likely helped everything, all the time. I know there are diminishing returns, but it really looks like more bandwidth would help scaling of the existing hardware, and cache hits effectively create more bandwidth.
But, but, how is Jensen supposed to sell a "premium" feature for megabucks if all the die space is spent on regular cache to boost rasterization performance? I can't even disagree with the approach; specialized hardware accelerators might be the long-term path that everyone ends up going down.

But Nvidia is determined to keep the performance crown at any cost for that halo status. If the last 10% of performance were given up, there would be a lot of power savings, which would reduce the BOM and allow for lower prices for AIBs and consumers. Nvidia does not want this: a higher price means a more "premium" product to the fanbase.
 
Jul 27, 2020
16,288
10,323
106
Some thoughts on RTX 4000's presumed lack of a large cache:

  • Their design was already too far along to incorporate 3D V-cache.
  • They can't risk a large on-die cache; a defect in that cache would render the whole die unusable, hurting yields.
  • AMD probably has a special deal in place for 3D V-cache exclusivity.
  • TSMC may have offered Nvidia a relatively small slice of the V-cache wafer allocation, which Nvidia declined, deeming it not worth the cost and trouble to incorporate into their design.
  • Nvidia is the GPU leader, so they didn't pursue the cache strategy aggressively for the consumer parts. They know those parts will sell well regardless.
 
  • Like
Reactions: Leeea

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
What I really don't like is that all the major performance enhancements seem to be in their proprietary technology. When you look at the "Today's games" uplift, it is only 50-70% compared to the major uplift with DLSS3.

I'm not that concerned about it being proprietary. I'm more concerned about DLSS 3 inserting fake frames.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Yeah, but even with that, as I mentioned the 4090 should be close to 2x even in raster games without DLSS3.

Yeah, it has about double everything, except memory BW, which is just about the same. They do imply a lot of L2 cache, but with all that compute power, if it's only getting 60%, that big cache clearly isn't big enough to make up for the relatively weak memory bandwidth.

AD102-GPU-DIAGRAM.png
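
A quick sanity check of that "double everything" claim. The shader counts, boost clocks, and bandwidth figures below are approximate published specs recalled from memory, so treat the output as ballpark rather than gospel:

```python
# Rough 3090 Ti vs 4090 comparison from approximate published specs (assumed figures).
specs = {
    #               FP32 shaders, boost clock (GHz), memory bandwidth (GB/s)
    "RTX 3090 Ti": (10752, 1.86, 1008),
    "RTX 4090":    (16384, 2.52, 1008),
}

def tflops(shaders: int, clock_ghz: float) -> float:
    # 2 FLOPs per shader per clock (FMA)
    return 2 * shaders * clock_ghz / 1000

for name, (shaders, clock, bw) in specs.items():
    print(f"{name}: ~{tflops(shaders, clock):.0f} TFLOPS FP32, {bw} GB/s")

# -> roughly 40 vs ~83 TFLOPS (a bit over 2x compute) on identical 1008 GB/s DRAM bandwidth,
#    which is presumably why Nvidia leans on the big L2 to carry the scaling.
```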
 

Saylick

Diamond Member
Sep 10, 2012
3,148
6,364
136
Yeah, it has about double everything, except memory BW, which is just about the same. They do imply a lot of L2 cache, but with all that compute power, if it's only getting 60%, that big cache clearly isn't big enough to make up for the relatively weak memory bandwidth.

AD102-GPU-DIAGRAM.png
The size of that blue rectangle means jacksh*t in relation to the L2 size unless we hear it from Nvidia's mouth.

Until then:
1663795484695.png
 

Saylick

Diamond Member
Sep 10, 2012
3,148
6,364
136
Chips and Cheese has some musings on both Ada Lovelace and RDNA 3. Most of it is a recap of what we know, but some is educated guessing. Pretty good insight in my opinion:
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
DLSS3 sounds terrible. I thought they made a genuine advancement with DLSS2. Now they pull this nonsense.

Yeah, but even with that, as I mentioned the 4090 should be close to 2x even in raster games without DLSS3.

Don't know why you or anyone is surprised.

Rule of thumb: Each major part in a GPU is responsible for 1/3rd of the performance. That is, 1/3 to shaders, 1/3 to fillrate, 1/3 to memory bandwidth.

Also, expecting performance per shader to increase each generation is weird. GPUs aren't like CPUs; they scale extremely well with extra compute units. They don't need to focus on per-shader performance because, well, you can just add more of them.
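
As a rough illustration of that rule of thumb, here is a toy additive frame-time model of my own (not an official methodology; real GPU stages overlap): weight shaders, fillrate, and memory at one-third each and see what doubling a resource buys.

```python
# Toy additive model of the "1/3 shaders, 1/3 fillrate, 1/3 bandwidth" rule of thumb.
# Back-of-the-envelope only; real GPUs overlap these stages.

def speedup(weights: dict[str, float], scale: dict[str, float]) -> float:
    """Estimated speedup when each component's throughput is scaled by the given factor."""
    new_time = sum(w / scale.get(part, 1.0) for part, w in weights.items())
    return 1.0 / new_time

balanced = {"shaders": 1/3, "fillrate": 1/3, "memory": 1/3}

s_mem = speedup(balanced, {"memory": 2.0})
s_compute = speedup(balanced, {"shaders": 2.0, "fillrate": 2.0})

print(f"double bandwidth only:        ~{s_mem:.2f}x")      # ~1.20x, i.e. the 20-30% range
print(f"double shaders+fillrate only: ~{s_compute:.2f}x")   # ~1.50x, in the same ballpark as
                                                            # the ~60% raster uplift discussed above
```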
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
The 10 series provided good value for the money at release time. Turing was faster, but it was also proportionately more expensive, so a lot of Nvidia gamers decided to wait for Ampere. The 30 series was supposed to bring performance/$ back to sanity, and it did; the 30 series was priced right, but then the mining boom happened.

The 10 series didn't provide good value. When the GTX 1080 first came out, some criticized how they took what should have been the GTX 1070 but changed the name so they could charge more. And this was right before the mining boom.

Pricing has increased so much since then that people think the 10 series era was a godsend. Imagine things getting so bad that people reminisce about the Ada generation as good value!

It will be good for their 4050 or even 4030 part, especially the 1000+ laptop models released with the mobile 4050 :D

Cheap liars are still liars. :)
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Source please?

You'll just have to believe me. I've been reading reviews since the late '90s, and with the big reviews I'd run my own analysis on them to look at scaling and everything.

Approach it logically. A balanced system and game would take advantage of each of the major features equally. 20-30% gains can be had by doubling memory bandwidth. Same with fillrate, and same with shader firepower.

If one generation changes the balance too much, that means there's optimization to be had. Imagine a game/video card where doubling memory bandwidth increased performance by 60-80%. Then you know that somewhere in the game and/or GPU there's a serious memory bottleneck. It also means you are wasting resources by having too much shader and fillrate hardware, because you could have had a much better perf/$, perf/mm², and perf/W ratio.

That's why it's called a Rule of Thumb though. All sorts of real-world analysis has to be done to get the actual value. Game development changes, and quality of the management for the GPU team changes.

Efficiency is still a good thing. Fewer shader units means less die space and less heat.

There's no such thing as free. To get more performant shader units, you generally need more resources. Optimization is how you do better than that, but that requires innovation, ideas, and time, which don't always materialize.
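
To make the bottleneck diagnosis concrete, here is the same toy additive model run in reverse (again, my own crude illustration, not a measured figure): given the speedup observed after doubling memory bandwidth, back out how much of the frame time must have been memory-bound.

```python
# Back out the implied memory-bound fraction from an observed speedup after
# doubling memory bandwidth, using the same crude additive frame-time model.

def implied_memory_fraction(observed_speedup: float) -> float:
    """If halving the memory-bound portion of frame time gives this speedup,
    what fraction of the original frame time was memory-bound?"""
    # 1/speedup = (1 - w) + w/2  =>  w = 2 * (1 - 1/speedup)
    return 2 * (1 - 1 / observed_speedup)

for s in (1.2, 1.3, 1.6, 1.8):
    print(f"{s:.1f}x from doubled bandwidth -> ~{implied_memory_fraction(s):.0%} memory-bound")

# A 1.2-1.3x gain implies ~33-46% memory-bound (roughly balanced);
# 1.6-1.8x implies ~75-89%, i.e. a serious memory bottleneck and wasted shader/fillrate hardware.
```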
 

coercitiv

Diamond Member
Jan 24, 2014
6,199
11,895
136
The size of that blue rectangle means jacksh*t in relation to the L2 size unless we hear it from Nvidia's mouth.
We finally have something:

Info on the L2 cache for Ada Lovelace / AD102:
  • The full AD102 GPU includes 98,304 KB of L2 cache (a 16x increase over the 6,144 KB in a full GA102).
  • The GeForce RTX 4090 includes 73,728 KB of L2 cache (a 12x increase over the 6,144 KB in the GeForce RTX 3090 Ti).
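
In more familiar units, converting the quoted KB figures (simple arithmetic on the numbers above):

```python
# Convert the quoted L2 figures from KB to MB and double-check the multipliers.
full_ad102_kb, rtx4090_kb, ga102_kb = 98_304, 73_728, 6_144
print(full_ad102_kb / 1024, "MB full AD102")                # 96.0 MB
print(rtx4090_kb / 1024, "MB RTX 4090")                     # 72.0 MB
print(full_ad102_kb / ga102_kb, rtx4090_kb / ga102_kb)      # 16.0x and 12.0x vs GA102's 6 MB
```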