Discussion Ada/'Lovelace'? Next gen Nvidia gaming architecture speculation


beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
Whether you render the fake frame with forward projection, or interpolation between the frames to hit 60 FPS, you still aren't responding any faster than 30 FPS.

Your claims of using the information from the future frame before it's rendered are pure fantasy.

A simple example demonstrates that this DLSS 3 stuff is all nonsense. It can't predict when, in an FPS, an enemy will run around the corner. Simple as that. You're stuck with your real frame rate for reacting to that action. Same for StarCraft microing or whatever else depends on your opponent's actions. Which means there is zero, zero benefit from using it in the games that would actually profit from it. In a single-player game there isn't really any benefit to 120 Hz+ except maybe less eye strain.

In conclusion, this is a terrible, terrible marketing (and potentially benchmark-cheating) feature that exists solely to look good on paper without any actual practical benefit.
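Rough numbers, as a back-of-the-envelope sketch in plain Python (nothing DLSS-specific): the displayed rate goes up, but new game state still only arrives at the real render rate.

```python
# Back-of-the-envelope sketch: frame generation raises the displayed frame
# rate, but new game-state samples (the thing you can actually react to)
# still arrive at the real, rendered frame rate.

def frame_time_ms(fps: float) -> float:
    """Time between frames, in milliseconds."""
    return 1000.0 / fps

real_fps = 30       # frames the game actually renders
displayed_fps = 60  # real + generated frames shown on screen

print(f"Displayed frame time:   {frame_time_ms(displayed_fps):.1f} ms")   # 16.7 ms
print(f"New game state arrives: {frame_time_ms(real_fps):.1f} ms apart")  # 33.3 ms
# An enemy rounding a corner is only sampled every ~33 ms, no matter how many
# generated frames are shown in between.
```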
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
A simple example demonstrates that this DLSS 3 stuff is all nonsense. It can't predict when, in an FPS, an enemy will run around the corner. Simple as that. You're stuck with your real frame rate for reacting to that action. Same for StarCraft microing or whatever else depends on your opponent's actions. Which means there is zero, zero benefit from using it in the games that would actually profit from it. In a single-player game there isn't really any benefit to 120 Hz+ except maybe less eye strain.

In conclusion, this is a terrible, terrible marketing (and potentially benchmark-cheating) feature that exists solely to look good on paper without any actual practical benefit.
Good example. I can see DLSS 3 being useful only if you already have a base fps of 60+ and a 120+ Hz monitor, so you can use the full refresh rate. But let's be real, there is going to be a minimum base frame rate that you need to achieve, irrespective of frame generation, for a proper gaming experience, and it will vary with the type of game. For single-player games, it's likely going to be 30 Hz. For multiplayer games, it's likely 60 Hz, and for competitive e-sports titles, it's likely 120 Hz. If your GPU can't generate at least those base frame rates, DLSS 3 Frame Generation ain't going to save you.
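If you wanted to encode that rule of thumb, it might look something like this (the thresholds and the helper are just the guesses above, nothing Nvidia actually specifies):

```python
# Hypothetical rule of thumb from the post above: frame generation only makes
# sense once the *base* frame rate clears a genre-dependent floor and the
# monitor has refresh-rate headroom to display the extra frames.

MIN_BASE_FPS = {
    "single_player": 30,
    "multiplayer": 60,
    "esports": 120,
}

def frame_gen_worthwhile(base_fps: float, monitor_hz: int, genre: str) -> bool:
    meets_floor = base_fps >= MIN_BASE_FPS[genre]
    has_headroom = monitor_hz > base_fps  # extra frames must be displayable
    return meets_floor and has_headroom

print(frame_gen_worthwhile(70, 144, "multiplayer"))  # True
print(frame_gen_worthwhile(40, 144, "multiplayer"))  # False: base fps below the floor
```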
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
It is not like most TVs. TVs have both a start and finish key frame.

This only has the start frame, and just guesses at what the finish frame is going to be.

This is very different.

Actually, they are using at least 2 preceding frames, because they need more than one to predict the direction of the future frame.

So they are both using at least 2 frames to interpolate/extrapolate another.

It's very similar, though doing in-between frames helps with accuracy because you know the end state, at the cost of additional lag.

Predicting a future frame from two past frames is harder, but should have a much more limited impact on lag. The important point, though: no matter how good it is, your lag will still be tied to the real base frame rate, not the new fabricated one.

Even if it introduced zero lag (unlikely) and had no obvious artifacts (unlikely), it would still not improve reactivity the way a real frame rate increase would.

If your game is running at 30 FPS and you extrapolate it to 60 FPS, it would be less responsive than if you just turned down some settings and ran at a real 45 FPS.
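A toy illustration of the interpolation-vs-extrapolation trade-off, reduced to a single object's position (the real thing works on optical flow fields and motion vectors, not object coordinates, but the trade-off is the same):

```python
# Toy example: generate a new position either by interpolating between two
# known frames (needs the future frame -> adds delay) or by extrapolating
# from two past frames (no waiting, but it's a guess that can be wrong).

def interpolate(prev: float, nxt: float) -> float:
    """Midpoint between two known frames (the TV approach): accurate, but
    the 'next' frame must already exist, so showing it is delayed."""
    return (prev + nxt) / 2.0

def extrapolate(older: float, newer: float) -> float:
    """Continue the motion seen across two past frames: available
    immediately, but purely a prediction."""
    return newer + (newer - older)

# Object moves 0 -> 10, then suddenly slows and only reaches 12.
x_t0, x_t1, x_t2 = 0.0, 10.0, 12.0

print(interpolate(x_t1, x_t2))  # 11.0 -- close to reality, but needed frame t2
print(extrapolate(x_t0, x_t1))  # 20.0 -- instant, but wrong: it couldn't know
                                # the motion was about to change
```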
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
Actually, they are using at least 2 preceding frames, because they need more than one to predict the direction of the future frame.

So they are both using at least 2 frames to interpolate/extrapolate another.

It's very similar, though doing in-between frames helps with accuracy because you know the end state, at the cost of additional lag.

Predicting a future frame from two past frames is harder, but should have a much more limited impact on lag. The important point, though: no matter how good it is, your lag will still be tied to the real base frame rate, not the new fabricated one.

Even if it introduced zero lag (unlikely) and had no obvious artifacts (unlikely), it would still not improve reactivity the way a real frame rate increase would.

If your game is running at 30 FPS and you extrapolate it to 60 FPS, it would be less responsive than if you just turned down some settings and ran at a real 45 FPS.
I surmise Nvidia will find a way, or at least attempt, to adjust the optical flow and motion vectors with your input in between key frames. So if your base frame rate was 10 fps, boosted to 100 using frame gen, then in the gaps between those stuttering-mess 10 fps key frames your keystrokes and mouse inputs would be factored into the extrapolation/projection process. Will it work well? Probably not, especially if you're moving or panning around fast in all sorts of non-linear directions, but Nvidia will try to use marketing slides to convince the masses that they have god-tier technology.

Quite frankly, Nvidia might as well just introduce a new DLSS feature where it tries to guess/predict what inputs you might make based off of previous frames. To make it real slick, the AI model has been trained by having gaming professionals play each game for hundreds, if not thousands, of hours (for intern pay of course) so that you, the gamer, can take advantage of their skill! Imagine the possibilities. Just sit back, press W on your keyboard to move forward, and DLSS Skill Generation will do the rest by playing the game for you!
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
I surmise Nvidia will find a way, or at least attempt, to adjust the optical flow and motion vectors with your input in between key frames. So if your base frame rate was 10 fps, boosted to 100 using frame gen, then in the gaps between those stuttering-mess 10 fps key frames your keystrokes and mouse inputs would be factored into the extrapolation/projection process. Will it work well? Probably not, especially if you're moving or panning around fast in all sorts of non-linear directions, but Nvidia will try to use marketing slides to convince the masses that they have god-tier technology.

This won't be the case. The GPU has no idea what key inputs are being pressed. The frame generation is based solely on the preceding frames, and the AI will determine what it thinks will come next. They have already posted the flow diagram for this.
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
This won't be the case. The GPU has no idea what key inputs are being pressed. The frame generation is based solely on the preceding frames, and the AI will determine what it thinks will come next. They have already posted the flow diagram for this.
The GPU currently doesn't know, but I don't see why Nvidia can't develop yet another SDK that forwards the user input as another variable into the frame generation process. It will just make implementation even more challenging, because the game engine needs to pass on more and more info now.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
The GPU currently doesn't know, but I don't see why Nvidia can't develop yet another SDK that forwards the user input as another variable into the frame generation process.

Because that would add latency, and require each and every game to be customized. nVidia claims no increase in latency (we will see). But if the game engine is having to use CPU cycles to calculate movement to send to the GPU, that defeats the entire purpose of this tech.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
Nvidia is leaving a lot of AD102 left over for a 90 Ti this time, 2K more cores. The 3090 was like 98% of a complete GA102, whereas the 4090 is not even quite 90% of AD102.

They were squeezed last time because the 3080 shared the same die, and they had to try to make the 3090 stand out.

This time the 4090 has the die all to itself, allowing them to be a lot more relaxed and leaving room for future higher-end and lower-end models.
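The published CUDA core counts bear out the comparison in the quote above (just quick arithmetic on the known specs, nothing official about future SKUs):

```python
# Quick check of the "how much of the full die" comparison using the
# published CUDA core counts.

configs = {
    "RTX 3090 / GA102": (10496, 10752),   # (enabled cores, full-die cores)
    "RTX 4090 / AD102": (16384, 18432),
}

for name, (enabled, full) in configs.items():
    print(f"{name}: {enabled / full:.1%} of the full die, "
          f"{full - enabled} cores held back")

# RTX 3090 / GA102: 97.6% of the full die, 256 cores held back
# RTX 4090 / AD102: 88.9% of the full die, 2048 cores held back
```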
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Nvidia is leaving a lot of AD102 left over for a 90 Ti this time, 2K more cores. The 3090 was like 98% of a complete GA102, whereas the 4090 is not even quite 90% of AD102.

Yep. Been saying that nVidia wanted the product stack to have definitive performance gaps unlike Ampere. I also think that given the 16 GB is $1199 there isn't room for a 4080 Ti.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Eh, people have been touting RTX as the "future", yet other than turning it on to see the difference in a game here and there, I just turn it off to avoid the huge fps drop. It has never once bothered me.

Ray tracing does provide a more realistic image so it's hard to argue that we wouldn't want to use it in a world where we have as much computational power as we want, which is what we're slowly inching towards.

DLSS 3 doesn't seem to add much value, in my opinion, particularly compared to previous versions, which could actually extend the life of a card, assuming it's still getting support four years down the road.

I can't see any reason to turn on DLSS3 other than to delude yourself into believing that you're getting a better frame rate than you actually are. In most cases you're going to be better off lowering settings to get a higher frame rate, because the inserted frames don't improve response in any way and the image quality will naturally be worse as a consequence of not being real.
 
  • Like
Reactions: Tlh97 and Leeea

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
Yep. Been saying that nVidia wanted the product stack to have definitive performance gaps unlike Ampere. I also think that given the 16 GB is $1199 there isn't room for a 4080 Ti.

There was even less room on the die for a 3080 Ti, but we got that and 3080 12 GB between the 3080 and 3090 and then a 3090 Ti on top.

A total of 5 different configs/designations for GA102.

I think it's a fair certainty that there will be at least 3 configs of AD102, and 4080 Ti will probably be one of them.
 
  • Like
Reactions: Tlh97 and Leeea

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
There was even less room on the die for a 3080 Ti, but we got that and 3080 12 GB between the 3080 and 3090 and then a 3090 Ti on top.

A total of 5 different configs/designations for GA102.

I think it's a fair certainty that there will be at least 3 configs of AD102, and 4080 Ti will probably be one of them.

That's what I mean. They are intending to avoid that with Ada.
 
  • Like
Reactions: ZGR and Leeea

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
Yep. Been saying that nVidia wanted the product stack to have definitive performance gaps unlike Ampere. I also think that given the 16 GB is $1199 there isn't room for a 4080 Ti.
Not at that price there ain't, but I do not see Nvidia intending to keep the price there permanently. Once the Ampere stock runs dry, both forms of the 4080 will come down in price, leaving a bigger pocket for a 4080 Ti to sit in.
 

Ranulf

Platinum Member
Jul 18, 2001
2,331
1,139
136
I believe a poll regarding the new nvidia prices is a must, so we can see what we think, in numbers. I bet the negative votes will be increased by a DLSS3 factor.

Oh people can come up with numbers to prove anything. 21% (rt off) to 92% (rt on) of all people know that.
 
  • Like
Reactions: Tlh97 and Leeea

Timmah!

Golden Member
Jul 24, 2010
1,395
602
136
I am annoyed by the AIB designs. They are horrible, all thick as hell. The FE is good enough at 3 slots high, but these clowns have to make theirs no less than 3.5 slots, which in a dual-GPU setup will leave you with a whopping half slot of free space between the cards.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
But you said there was no room for a 4080 Ti, while I'm saying there is plenty of room and there will almost certainly be a 4080 Ti.

nVidia very rarely cuts prices outside of mid cycle refreshes. I think what's going to happen is that they cut 4080 prices at the refresh cycle and replace the 4080 16 GB with the full AD103 at presumably $1199, which you would think would be called a 4080 Super. I suppose they could call it a 4080 Ti.
 
  • Like
Reactions: Tlh97 and Leeea

Tup3x

Senior member
Dec 31, 2016
944
925
136
. . .

The driver happily lies to you:

Your driver settings are just suggestions, nothing more.
And that has exactly what to do with anisotropic filtering not working on AMD in >DX9 games? On AMD's side it most definitely doesn't work with DX12 in games where it's possible to force it with NVIDIA's driver, and officially it doesn't work in anything other than <=DX9. It's very hard to find hard facts about whether or not it works in DX11 and OpenGL. I would be extremely surprised if it didn't work in OpenGL games, unless they for some reason removed the support. There are still DX12 (and DX11) games with seriously broken texture filtering settings, or with no way to select 16x AF in game while the default setting is something way lower. If I could find solid evidence that it actually works in at least DX11 games, that would be good to know. Reports like this do not exactly fill me with confidence, even for older games: https://community.amd.com/t5/drivers-software/anisotropic-filtering-under-adrenalin-2020/m-p/163415

Everyone knows how complicated forcing MSAA is in >DX8 games thanks to newer rendering techniques. At least with NVIDIA there is some room to fiddle around using NVIDIA Inspector. If that doesn't work, then downsampling is always an option. If the in-game UI causes issues, then post-process AA is the only option. We have had that conversation before.

In any case, I buy stuff that has the features (and performance) I need. If Intel can do that, for example, I wouldn't hesitate to buy their cards. But one thing is clear: I'll be skipping these cards. They are nuts trying to sell a 295 mm² GPU with a 192-bit bus for 1129 €. For comparison, GA104 was 392 mm²... I just don't see how they can possibly justify this asking price. I mean, what the ....??? Also, calling it the RTX 4080 12 GB is just deceiving. The RTX 4070 and anything below it are surely going to be underwhelming and overpriced.
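Side by side, using only the numbers already quoted in this post (the 1129 € price and the two die sizes):

```python
# The die-size and price numbers from the post above, side by side.
die_4080_12gb_mm2 = 295   # "RTX 4080 12 GB" die size quoted above
ga104_mm2 = 392           # previous-gen GA104, quoted for comparison
price_eur = 1129

print(f"Die size vs GA104: {die_4080_12gb_mm2 / ga104_mm2:.0%}")          # ~75%
print(f"Asking price per mm^2: {price_eur / die_4080_12gb_mm2:.2f} EUR")  # ~3.83
```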
 

HurleyBird

Platinum Member
Apr 22, 2003
2,670
1,250
136
nVidia very rarely cuts prices outside of mid cycle refreshes.

If AMD solidly beats them in raster and appreciably undercuts them while using less energy, they might not have much choice.

If not, and the market rejects the new offerings at their current prices, then I could see a new "unofficial" lower MSRP, or the refresh coming sooner than expected.