Discussion Ada/'Lovelace'? Next gen Nvidia gaming architecture speculation


Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
Am I the only one who's a bit disappointed given the original expectations from months ago? Everyone was talking about how this is basically a two-node or 1.5-node jump, going from "bad" Samsung 8nm to TSMC 5/4nm, and yet we're seeing roughly 1.6x raster performance or 1.8x RT performance while using 2.7 times the transistors, an insane cooler, and 450 watts?

DLSS 3 is obviously a joke.

Node jumps aren't as impressive as they used to be. Dennard scaling has fallen by the wayside.

I don't put much stock in rumors, so I don't let them get my expectations up.
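
Back-of-the-envelope, the scaling complaint works out like this (a rough sketch, using the commonly cited transistor counts of ~28.3B for GA102 and ~76.3B for AD102 and the rough gains quoted above):

# Rough perf-per-transistor scaling: AD102 (4090) vs GA102 (3090/3090 Ti).
# Ballpark public figures, not measured values.
ga102_transistors = 28.3e9   # GA102, ~28.3 billion
ad102_transistors = 76.3e9   # AD102, ~76.3 billion

raster_gain = 1.6            # ~1.6x raster, as claimed above
rt_gain = 1.8                # ~1.8x RT, as claimed above

transistor_ratio = ad102_transistors / ga102_transistors   # ~2.7x
print(f"Transistor ratio: {transistor_ratio:.2f}x")
print(f"Raster perf per transistor: {raster_gain / transistor_ratio:.2f}x")  # ~0.59x
print(f"RT perf per transistor: {rt_gain / transistor_ratio:.2f}x")          # ~0.67x

By that crude metric, perf per transistor actually went down, though a large chunk of AD102's budget went into L2 cache and RT/tensor hardware rather than raster units.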
 
  • Like
Reactions: Leeea

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Disappointing is a bit of a stretch. This card will let you run 4K with all the bells and whistles without needing to compromise. In many titles you don't even need DLSS to hold acceptable minimum frame rates, even with ray tracing turned on.

I'm more concerned with cards like the 4080 12 GB which won't look as impressive for their asking price.
 

blckgrffn

Diamond Member
May 1, 2003
9,111
3,029
136
www.teamjuchems.com
Am I the only one who's a bit disappointed given the original expectations from months ago? Everyone was talking about how this is basically a two-node or 1.5-node jump, going from "bad" Samsung 8nm to TSMC 5/4nm, and yet we're seeing roughly 1.6x raster performance or 1.8x RT performance while using 2.7 times the transistors, an insane cooler, and 450 watts?

DLSS 3 is obviously a joke.

Disappointed? IMO this is a full generational jump in performance that we could have had most of with Ampere if there hadn't been a process SNAFU. So we got both a refinement and a big silicon technology jump.

If anything, I think we should be prepared for underwhelming releases going forward (even a 5090). This same jump won't happen again for a while.

Which makes it even clearer to me: you are no fool if you buy this day one. It's a 1080 Ti type of card that will likely be serviceable (laugh if you want) until 2030, if Nvidia chooses to support it that long, which I believe they will.

Or you could buy a middling card now, another in three years, and another three years after that; finally, six years and three cards later, you'll have something measurably faster. Congrats.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
All this for $100 more than the original 3090 leaves buyers at this end of the market really well served.

You are ignoring the 3090 Ti, which came out only six months ago, costs $2,000, and is significantly slower than the 4090. People who bought those cards got nailed to the wall.

Not that I have any sympathy for somebody who spends $2K on a GPU, but it continues nVidia's "let's screw the customer to make a few more bucks" mindset. If the 3090 Ti had just replaced the 3090 at the same MSRP, it would not really be an issue. But nVidia priced it knowing that the 4090 was going to decimate it.
 

linkgoron

Platinum Member
Mar 9, 2005
2,286
810
136
Disappointed? IMO this is a full generational jump in performance that we could have had most of with Ampere if there hadn't been a process SNAFU. So we got both a refinement and a big silicon technology jump.
So you're basically agreeing with me if you're saying that Ampere would have been way closer on TSMC 7nm. For example, Navi 21 was twice as fast as Navi 10 at 4K with 2.7 times the transistors, on the same node (according to TPU, 6900 XT vs 5700 XT).
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,719
7,016
136
TPU has one of the most extensive game test suites, and they show the 4090 capping out at about 45% faster than the 3090 Ti at 4K.

I suppose the increase in performance from review to review is going to depend much more heavily on the CPU in the test bench, as well as the mix of older DX11 titles in the suite.

All told, the 4090 looks like a hell of a card for the future, but it's sort of wasted on the games of today at anything shy of 4K+ resolution.
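
The CPU-dependence point is easy to see with a toy frame-time model (purely illustrative numbers, not taken from any review):

# Toy model: each frame costs max(CPU time, GPU time).
# If the CPU can only prepare ~167 fps worth of frames, a GPU that is
# 60% faster on paper shows much smaller gains once it hits that wall.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 6.0                        # hypothetical CPU limit (~167 fps)
for res, gpu_ms_old in [("1440p", 7.0), ("4K", 14.0)]:
    gpu_ms_new = gpu_ms_old / 1.6   # a GPU 1.6x faster in raw raster
    gain = fps(cpu_ms, gpu_ms_new) / fps(cpu_ms, gpu_ms_old)
    print(f"{res}: observed gain {gain:.2f}x")
# 1440p: ~1.17x (hits the CPU wall), 4K: ~1.60x (still GPU-bound)

A faster test-bench CPU pushes that wall further out, which is why review-to-review numbers diverge so much below 4K.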

 

n0x1ous

Platinum Member
Sep 9, 2010
2,572
248
106
The 4090 should only be tested with top-of-the-line CPUs like the 5800X3D or Raptor Lake/Zen 4. It's bottlenecked even at 4K by mortal CPUs.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
The RTX 4090 is a very good card. Maybe the performance increase is not as high as I originally expected, but it's still a lot better than Ampere, and they can release a full version with 20-25% more performance.
I love the much improved perf/W.
The RTX 4080 12GB looks pretty expensive for 3080-3080 Ti levels of performance.
AMD will have to try their best. Tomorrow we will know some new info about RDNA3, according to Kepler_L2.
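
For reference on the headroom of a full die (a quick sketch; the SM counts are the published AD102/4090 specs, the clock bump is purely hypothetical):

# Full AD102 has 144 SMs; the 4090 ships with 128 enabled.
full_sms, rtx4090_sms = 144, 128
sm_headroom = full_sms / rtx4090_sms - 1            # 12.5% more SMs
clock_bump = 0.08                                   # hypothetical ~8% clock increase
combined = (1 + sm_headroom) * (1 + clock_bump) - 1
print(f"SM headroom: {sm_headroom:.1%}, with clock bump: {combined:.1%}")
# ~12.5% from SMs alone, ~21.5% with an 8% clock bump,
# ignoring that game performance rarely scales linearly with SM count.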
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Am I the only one who's a bit disappointed given the original expectations from months ago? Everyone was talking about how this is basically a two-node or 1.5-node jump, going from "bad" Samsung 8nm to TSMC 5/4nm, and yet we're seeing roughly 1.6x raster performance or 1.8x RT performance while using 2.7 times the transistors, an insane cooler, and 450 watts?

As I've been saying, I think it's a combination of memory bandwidth and maybe even CPU limitations at 4K.

Even then, in ComputerBase's review at 4K, I counted four games that were 80% faster than the fastest Ampere card, and Spider-Man was 77% faster.
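
On the bandwidth half of that theory: the 4090's raw memory bandwidth barely moved versus the top Ampere cards (a quick sketch from the published bus widths and data rates):

# GDDR6X bandwidth = bus width in bytes * effective data rate in Gbps.
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

cards = {
    "RTX 3090":    (384, 19.5),  # 936 GB/s
    "RTX 3090 Ti": (384, 21.0),  # 1008 GB/s
    "RTX 4090":    (384, 21.0),  # 1008 GB/s, same as the 3090 Ti
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
# Ada leans on a much larger L2 (72 MB vs GA102's 6 MB) to make up for it.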
 
  • Like
Reactions: xpea and Saylick

lixlax

Member
Nov 6, 2014
183
150
116
Am I the only one who's a bit disappointed given the original expectations from months ago? Everyone was talking about how this is basically a two-node or 1.5-node jump, going from "bad" Samsung 8nm to TSMC 5/4nm, and yet we're seeing roughly 1.6x raster performance or 1.8x RT performance while using 2.7 times the transistors, an insane cooler, and 450 watts?

DLSS 3 is obviously a joke.
That's exactly what I was thinking. The gains are a very decent generational leap, but considering the huge node jump, the transistor count, the big jump in clock speed, and the supposedly newer/better architecture, it looks like there's a rather serious bottleneck somewhere. Perf per teraflop is much lower.
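
Putting a number on the perf-per-teraflop drop (a rough sketch; the FP32 figures are the usual spec-sheet numbers, and the ~1.65x gain is the ballpark from reviews cited in this thread):

# Spec-sheet FP32 throughput: shader count * 2 ops/clock * boost clock.
rtx3090ti_tflops = 10752 * 2 * 1.86e9 / 1e12   # ~40.0 TFLOPS
rtx4090_tflops   = 16384 * 2 * 2.52e9 / 1e12   # ~82.6 TFLOPS

tflops_ratio  = rtx4090_tflops / rtx3090ti_tflops   # ~2.06x on paper
observed_gain = 1.65                                # ~65% faster at 4K (ballpark)

print(f"Paper FLOPS ratio: {tflops_ratio:.2f}x")
print(f"Perf per TFLOP vs the 3090 Ti: {observed_gain / tflops_ratio:.2f}x")  # ~0.80x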
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,092
1,065
136
The 3090 Ti has a core clock of 1900-2000 MHz and the 4090 can push 3 GHz. So what are we looking at here with the 4090? A huge node jump and huge GPU core clock increases, for 65-70% better performance. Is the TSMC process 4nm? I know Nvidia is one tick ahead of AMD's RDNA3. I think RDNA3 is on 5nm.
 

blckgrffn

Diamond Member
May 1, 2003
9,111
3,029
136
www.teamjuchems.com
So you're basically agreeing with me if you're saying that Ampere would have been way closer on TSMC 7nm. For example, Navi 21 was twice as fast as Navi 10 at 4K with 2.7 times the transistors, on the same node (according to TPU, 6900 XT vs 5700 XT).

But that didn't happen, so this is what we get. Had it been only a 20-30% raster increase, I think we would have been right to freak out. If they had dedicated themselves to raster performance, maybe we would have gotten a full 100% raster increase.

Obviously, they didn't.

This is what I thought it would be... so I'm not disappointed.

That's exactly what I was thinking. The gains are a very decent generational leap, but considering the huge node jump, the transistor count, the big jump in clock speed, and the supposedly newer/better architecture, it looks like there's a rather serious bottleneck somewhere. Perf per teraflop is much lower.

It looks like there is more headroom with even faster CPUs. We'll see.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
The 3090 Ti has a core clock of 1900-2000 MHz and the 4090 can push 3 GHz. So what are we looking at here with the 4090? A huge node jump and huge GPU core clock increases, for 65-70% better performance. Is the TSMC process 4nm? I know Nvidia is one tick ahead of AMD's RDNA3. I think RDNA3 is on 5nm.
It's not 4nm but a custom 5nm process, from what I've read on the internet.
 

Yosar

Member
Mar 28, 2019
28
136
76
That's exactly what I was thinking. The gains are a very decent generational leap, but considering the huge node jump, the transistor count, the big jump in clock speed, and the supposedly newer/better architecture, it looks like there's a rather serious bottleneck somewhere. Perf per teraflop is much lower.

Yep, exactly. Disappointing. Comparing power efficiency against the 3090 Ti is very wrong because the 3090 Ti was already a freak of nature. Against the 3090 we get 55-60% more performance at 4K for 100 W (25-30%) more power. 2.7 times more transistors, thousands of additional shaders, new cache, a much better process, and the result is a monster card with only 55-60% better performance at best?
I fail to see almost any progress here.
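
Working the quoted numbers through (a quick sketch using the stock power limits and the 55-60% range above):

# Perf/W vs the 3090, from board power limits and the quoted gain.
rtx3090_w, rtx4090_w = 350, 450
perf_gain   = 1.575                          # midpoint of the 55-60% range at 4K
power_ratio = rtx4090_w / rtx3090_w          # ~1.29x more power
print(f"Power: +{power_ratio - 1:.0%}, perf/W: {perf_gain / power_ratio:.2f}x")
# ~22% better perf/W at stock limits; several reviews suggest the gap
# widens considerably when the 4090 is power-limited.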
 
  • Like
Reactions: Tlh97

biostud

Lifer
Feb 27, 2003
18,194
4,674
136
Yep, exactly. Disappointing. Comparing power efficiency against the 3090 Ti is very wrong because the 3090 Ti was already a freak of nature. Against the 3090 we get 55-60% more performance at 4K for 100 W (25-30%) more power. 2.7 times more transistors, thousands of additional shaders, new cache, a much better process, and the result is a monster card with only 55-60% better performance at best?
I fail to see almost any progress here.
I think they are heavily invested in DLSS 3 and ray tracing, so developers and consumers can rejoice at the performance from a monopolistic ecosystem.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,839
3,174
126
waiting on waterblocks for the 4090....

:)
 

xpea

Senior member
Feb 14, 2014
429
135
116
Yep, exactly. Disappointing. Comparing power efficiency against the 3090 Ti is very wrong because the 3090 Ti was already a freak of nature. Against the 3090 we get 55-60% more performance at 4K for 100 W (25-30%) more power. 2.7 times more transistors, thousands of additional shaders, new cache, a much better process, and the result is a monster card with only 55-60% better performance at best?
I fail to see almost any progress here.
Games don't yet use all the new features like SER, DMM, and OMM, but the performance is already excellent. Instead of looking at the half-empty glass, maybe you should also look at the half-full one...

RT perf (5 times faster than RDNA2)


And in rendering it's another huge generational leap; being 6 times faster than RDNA2 is no slouch.

 

exquisitechar

Senior member
Apr 18, 2017
655
862
136
Yep, exactly. Disappointing. Comparing power efficiency against the 3090 Ti is very wrong because the 3090 Ti was already a freak of nature. Against the 3090 we get 55-60% more performance at 4K for 100 W (25-30%) more power. 2.7 times more transistors, thousands of additional shaders, new cache, a much better process, and the result is a monster card with only 55-60% better performance at best?
I fail to see almost any progress here.
I don't know if we're just looking at different reviews, but in practice, it doesn't seem to be drawing 100 W more compared to the 3090.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Games don't yet use all the new features like SER, DMM, and OMM, but the performance is already excellent. Instead of looking at the half-empty glass, maybe you should also look at the half-full one...

RT perf (5 times faster than RDNA2)

With console gamers demanding 60 fps, it's safe to say that developers aren't going to go that far with RT. Mostly raster with some light RT is going to continue to be the norm.
 
  • Like
Reactions: Tlh97 and Mopetar