Discussion Ada/'Lovelace'? Next gen Nvidia gaming architecture speculation


Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
I don't think AMD has as much inventory. AMD just announced price cuts across the board. A 6800XT is now $599 and 6900XT is $699.


Read the fine print on the slide. That isn't AMD announcing a price cut, it's them reporting the street prices they found.

Also, you do know that supply and demand would indicate that someone selling under MSRP has excess supply.
 
  • Like
Reactions: scineram

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
It's not totally useless, but it's being presented in a very misleading manner by NVidia, treating it like a real frame rate gain, which it isn't, and yes, they are doing it for marketing.

So the best thing would be for them to fail and give up on DLSS 3, because competitors like AMD (and the big one, Intel) will do it better without the fake frames part.

I do think we're in a transition phase where card prices will become more competitive, but we're not fully there yet.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
So the best thing would be for them to fail and give up on DLSS 3, because competitors like AMD (and the big one, Intel) will do it better without the fake frames part.

I don't think NVidia will give up on DLSS 3 frame injection any time soon. But we have to hope reviewers see through the marketing smokescreen, and that attempts to pretend it's a real framerate get universally condemned.

I do think we're in a transition phase where card prices will become more competitive, but we're not fully there yet.

If by competitive, you mean significantly lower pricing, I wouldn't count on it. Chips are just getting more expensive to make. We are many years past the point where every new node gave 50-100% more transistors for free.

Now transistor costs have flatlined, so when they double transistors, it costs almost double to produce the chip.




If they brute force chips by throwing more bulk transistors at performance, it's going to be more expensive to produce them.
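
To put rough numbers on that (everything below is an illustrative back-of-the-envelope sketch, not actual foundry pricing):

```python
# Illustrative sketch of flat cost-per-transistor scaling: doubling the
# transistor budget now roughly doubles the die cost. The figures are
# made-up placeholders, not real foundry numbers.

old_transistors = 20e9              # previous-gen die
new_transistors = 40e9              # ~2x transistors on the new node

cost_per_transistor = 1.5e-9        # assumed flat across both nodes (arbitrary units)

old_die_cost = old_transistors * cost_per_transistor
new_die_cost = new_transistors * cost_per_transistor

print(f"old die cost: {old_die_cost:.0f}  new die cost: {new_die_cost:.0f}")
print(f"cost ratio: {new_die_cost / old_die_cost:.1f}x for "
      f"{new_transistors / old_transistors:.0f}x the transistors")
```

In the old regime, cost per transistor dropped with every shrink, so the ratio in the last line stayed near 1x even as transistor counts doubled. With a flat cost curve it tracks the transistor count instead.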
 
  • Like
Reactions: igor_kavinski

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
If by competitive, you mean significantly lower pricing, I wouldn't count on it. Chips are just getting more expensive to make. We are many years past the point where every new node gave 50-100% more transistors for free.

Competitive as in they won't be able to price it like Ada anymore. Innovation will still allow costs to be minimized.

Yes, I think in general the cost reduction is over. Actually, it was already over back in 2012, and it's been getting worse.

I don't know if reviewers will do that. Most of the modern reviewers seem to be no more than talking heads grabbing headlines. All of them need to be taken down a notch. Even Anandtech, when Anand left and someone replaced him (won't name who).
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Whether you render the fake frame with forward projection, or interpolation between the frames to hit 60 FPS, you still aren't responding any faster than 30 FPS.

Your claim of using information from the future frame before it's rendered is pure fantasy.

You may as well argue we are going to read the back propagating chroniton particles to draw the fake frame that hasn't been rendered yet. ;)

Back in reality, fake frames add nothing. They are motion smoothing, like on TVs, being sold as real frames.

TVs do this. SteamVR does this.

But NVidia adds fake frames and you think it's the next big thing...
The first thing is that the game's tick rate and the frame rate are decoupled. It's the game's tick rate (how fast it works out the state of the world) that sets how fast it responds to input, and that is not necessarily the frame rate. In fact, for any online game it's really the server's tick rate that matters, and that's normally 30 or 60 ticks/second.

So what we have with the guessed frame in DLSS 3 is an in-between frame the user sees and responds to, and what you are claiming is that the in-between frame is useless because it was made up. Firstly, because the server tick is 60 (i.e. it only tells you what everyone else is doing 60 times a second), even if there were no DLSS and you had 300 fps, all those frames after the last tick are effectively made up, as none of them know about any changes (e.g. an enemy player changing direction) until the next server tick. So what DLSS 3 is doing is really just what happens already.

Secondly, the in-between frame is only useless if it was incorrect, and it won't be nearly all the time. For example, in an FPS you are normally tracking the gun sight across until it ends up over the target and you pull the trigger. In DLSS 3 that made-up frame will be correct, as it's a steady tracking motion, so it's easy for the AI to have guessed the content of that frame correctly. If you pull the trigger when the gun sight crosses the target, even if it does so on the AI-generated DLSS 3 frame, you will score a hit, just as you would score a hit with the same number of fps and no DLSS - both ways of drawing the frame would have shown you the same thing at the same time, and you would have pressed the mouse button at the same instant for the same result.
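
To make the decoupling concrete, here's a minimal fixed-timestep loop sketch (a generic Python illustration, not from any particular engine): the world only advances at the tick rate, while rendering runs as fast as the GPU allows and interpolates between the last two ticks.

```python
import time

TICK_RATE = 60                 # simulation ticks per second (this sets input response)
TICK_DT = 1.0 / TICK_RATE

def simulate(state, dt):
    # Placeholder world update: move an object at a constant speed.
    return {"x": state["x"] + 100.0 * dt}

def render(prev, curr, alpha):
    # Display a blend of the last two simulated states.
    x = prev["x"] + (curr["x"] - prev["x"]) * alpha
    print(f"draw at x={x:.2f}")

def game_loop(duration=0.05):
    prev = curr = {"x": 0.0}
    accumulator = 0.0
    last = time.perf_counter()
    end = last + duration
    while time.perf_counter() < end:
        now = time.perf_counter()
        accumulator += now - last
        last = now
        # Fixed-timestep simulation: the tick rate, not the frame rate,
        # decides how often the world state (and your input) gets processed.
        while accumulator >= TICK_DT:
            prev, curr = curr, simulate(curr, TICK_DT)
            accumulator -= TICK_DT
        # Rendering runs as fast as it can and only interpolates between ticks.
        render(prev, curr, accumulator / TICK_DT)
        time.sleep(0.004)      # stand-in for the time a real frame takes to draw

game_loop()
```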
 

coercitiv

Diamond Member
Jan 24, 2014
6,204
11,909
136
Secondly, the in-between frame is only useless if it was incorrect, and it won't be nearly all the time. For example, in an FPS you are normally tracking the gun sight across until it ends up over the target and you pull the trigger. In DLSS 3 that made-up frame will be correct, as it's a steady tracking motion, so it's easy for the AI to have guessed the content of that frame correctly.
Moments later you're waiting for the next target around a corner. You can hear the footsteps. The AI model has no prior frame reference, so it will keep drawing "empty" frames until a true render shows the enemy. The latency introduced by DLSS 3 will be added on top of the game's "tick rate". Nvidia call it peeker's advantage.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
The first thing is that the game's tick rate and the frame rate are decoupled. It's the game's tick rate (how fast it works out the state of the world) that sets how fast it responds to input, and that is not necessarily the frame rate. In fact, for any online game it's really the server's tick rate that matters, and that's normally 30 or 60 ticks/second.

So what we have with the guessed frame in DLSS 3 is an in-between frame the user sees and responds to, and what you are claiming is that the in-between frame is useless because it was made up. Firstly, because the server tick is 60 (i.e. it only tells you what everyone else is doing 60 times a second), even if there were no DLSS and you had 300 fps, all those frames after the last tick are effectively made up, as none of them know about any changes (e.g. an enemy player changing direction) until the next server tick. So what DLSS 3 is doing is really just what happens already.

Secondly, the in-between frame is only useless if it was incorrect, and it won't be nearly all the time. For example, in an FPS you are normally tracking the gun sight across until it ends up over the target and you pull the trigger. In DLSS 3 that made-up frame will be correct, as it's a steady tracking motion, so it's easy for the AI to have guessed the content of that frame correctly. If you pull the trigger when the gun sight crosses the target, even if it does so on the AI-generated DLSS 3 frame, you will score a hit, just as you would score a hit with the same number of fps and no DLSS - both ways of drawing the frame would have shown you the same thing at the same time, and you would have pressed the mouse button at the same instant for the same result.

I don't really see much merit in this argument, even if you had responded before the new information was revealed.

But it has since been revealed that DLSS 3 fake frame insertion adds half a frame of latency.

By the time you see that generated fake frame with DLSS 3 ON, you would already have seen the REAL frame with DLSS 3 fake frames OFF.

It should be obvious that it's much better to get the REAL frame arriving on time than a predicted in-between fake frame arriving then and delaying the real one.

DLSS 3 fake frames add latency and generate artifacts. This is essentially just like TV motion smoothing: useless for any gaming that depends on quick reaction. It clearly makes things worse for these games.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
The first thing is that the game's tick rate and the frame rate are decoupled. It's the game's tick rate (how fast it works out the state of the world) that sets how fast it responds to input, and that is not necessarily the frame rate. In fact, for any online game it's really the server's tick rate that matters, and that's normally 30 or 60 ticks/second.

So what we have with the guessed frame in DLSS 3 is an in-between frame the user sees and responds to, and what you are claiming is that the in-between frame is useless because it was made up. Firstly, because the server tick is 60 (i.e. it only tells you what everyone else is doing 60 times a second), even if there were no DLSS and you had 300 fps, all those frames after the last tick are effectively made up, as none of them know about any changes (e.g. an enemy player changing direction) until the next server tick. So what DLSS 3 is doing is really just what happens already.

Secondly, the in-between frame is only useless if it was incorrect, and it won't be nearly all the time. For example, in an FPS you are normally tracking the gun sight across until it ends up over the target and you pull the trigger. In DLSS 3 that made-up frame will be correct, as it's a steady tracking motion, so it's easy for the AI to have guessed the content of that frame correctly. If you pull the trigger when the gun sight crosses the target, even if it does so on the AI-generated DLSS 3 frame, you will score a hit, just as you would score a hit with the same number of fps and no DLSS - both ways of drawing the frame would have shown you the same thing at the same time, and you would have pressed the mouse button at the same instant for the same result.

In addition to the above post from @guidryp, what you state isn't entirely true.

There is a reason twitch shooters want as many FPS as possible: it's to reduce input lag. You are stating that a user's inputs are sent to the server for processing, which is not how online games work. The input lag in that case would be obnoxiously bad, and the load on servers would be significantly higher. The communication to the server is typically a combination of current location and the speed/direction of movement, plus additional things like the player using some ability, firing a weapon, etc.

INPUT lag is entirely a local thing. It's the latency between entering some sort of input and seeing the results of that input on screen. DLSS3 ADDS latency. Getting higher frame rates with real frames REDUCES latency.
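
As a rough sketch of the kind of update described above (the field names are invented for the example, not taken from any particular game's netcode):

```python
import json
import time

# Hypothetical client -> server update. The client resolves its own inputs
# locally and only reports the resulting state plus actions, which is why
# input lag stays a purely local (engine + render + display) matter.
def build_client_update(player):
    return json.dumps({
        "t": time.time(),                 # client timestamp
        "pos": player["pos"],             # current location
        "vel": player["vel"],             # speed/direction of movement
        "actions": player["actions"],     # e.g. firing a weapon, using an ability
    })

update = build_client_update({
    "pos": [12.0, 0.0, -3.5],
    "vel": [0.0, 0.0, 5.0],
    "actions": ["fire"],
})
print(update)
```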
 

jpiniero

Lifer
Oct 1, 2010
14,600
5,221
136

Apparently somebody in Hong Kong bought a Gigabyte 4090. No drivers though so no benchmarks.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Moments later you're waiting for the next target around a corner. You can hear the footsteps. The AI model has no prior frame reference, so it will keep drawing "empty" frames until a true render shows the enemy. The latency introduced by DLSS 3 will be added on top of the game's "tick rate". Nvidia call it peeker's advantage.

Imagine how bad the comparison will be between DLSS3 and DLSS2 with Reflex. This is going to leave plenty of gamers smashing their keyboards in frustration and blaming the computer for their loss - in this case, it'll be true.

Nvidia - The way you are meant to be killed.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
In addition to the above post from @guidryp, what you state isn't entirely true.

There is a reason twitch shooters want as many FPS as possible: it's to reduce input lag. You are stating that a user's inputs are sent to the server for processing, which is not how online games work. The input lag in that case would be obnoxiously bad, and the load on servers would be significantly higher. The communication to the server is typically a combination of current location and the speed/direction of movement, plus additional things like the player using some ability, firing a weapon, etc.

INPUT lag is entirely a local thing. It's the latency between entering some sort of input and seeing the results of that input on screen. DLSS3 ADDS latency. Getting higher frame rates with real frames REDUCES latency.
Yes, I completely agree that when the AI doesn't guess correctly what to draw, you will get 1/2 a frame of output lag (not input; there is no delay in input reaching the game engine). However, that's only when it doesn't guess correctly, and like I said in the key points, it will mostly get it right. Even with that change in direction, while the visual effect of it was delayed slightly, you had no lag as you came up to the change in direction (and that does need to go to the server, as it is the arbiter of truth; we have all seen ourselves rubber-band back when we lag and it disagrees). That is much more important, e.g. when you are doing a jump, the fact that there is a tiny bit of lag after the jump is much less important than lag in doing the jump - i.e. DLSS 3 did not stop you timing your jump correctly; there was no lag then.

Anyway, the proof will be in the real testing; we will see then. All I am saying is that it's far from proven that the AI frames will be a problem.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Yes, I completely agree that when the AI doesn't guess correctly what to draw, you will get 1/2 a frame of output lag (not input; there is no delay in input reaching the game engine). However, that's only when it doesn't guess correctly, and like I said in the key points, it will mostly get it right.


It ALWAYS has a 1/2 frame of lag. Being right or wrong has nothing to do with it; that's just the amount of latency it adds, according to a Q&A with NVidia.

The real frame arrives 1/2 frame late, and instead you get a predicted frame. Even if it's the correct prediction, it's still only halfway between the last frame and where the next real frame is going to be - the next real frame that you would already be seeing if you had fake frame creation off.
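
A quick worked example of where that half frame comes from, assuming a 60 fps native render rate purely for illustration:

```python
# Why interpolated frame generation always costs roughly half a real frame:
# the generated frame sits between real frames N-1 and N, so frame N has to
# be held back while the in-between frame is shown first.
# Assumes 60 fps native rendering purely for illustration.

native_fps = 60
frame_time_ms = 1000.0 / native_fps       # ~16.7 ms between real frames

added_latency_ms = frame_time_ms / 2      # ~8.3 ms: real frame N is delayed this long
displayed_fps = native_fps * 2            # what the fps counter reports

print(f"native frame time : {frame_time_ms:.1f} ms")
print(f"added display lag : ~{added_latency_ms:.1f} ms (half a real frame)")
print(f"displayed fps     : {displayed_fps}, but new input still only "
      f"affects {native_fps} real frames/s")
```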
 

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
It ALWAYS has a 1/2 frame of lag. Being right or wrong has nothing to do with it; that's just the amount of latency it adds, according to a Q&A with NVidia.

The real frame arrives 1/2 frame late, and instead you get a predicted frame. Even if it's the correct prediction, it's still only halfway between the last frame and where the next real frame is going to be - the next real frame that you would already be seeing if you had fake frame creation off.
If the imagined frame continues with motion farther along the previous path, won't the correction, which could be perceived as lag, be even worse visually when displayed? You should see jitters/shifts when/where fast movement occurs, as a bigger displacement needs to be corrected.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
If the imagined frame continues with motion farther along the previous path, won't the correction, which could be perceived as lag, be even worse visually when displayed? You should see jitters/shifts when/where fast movement occurs, as a bigger displacement needs to be corrected.

I fully expect there to be jitter, depending on the game. I am sure many of the reviewers will dive into this very thing. I don't expect the visuals will be as bad as no v-sync, where you end up with half frames being drawn, but it's probably going to be noticeable.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Yes, I completely agree that when the AI doesn't guess correctly what to draw, you will get 1/2 a frame of output lag (not input; there is no delay in input reaching the game engine). However, that's only when it doesn't guess correctly, and like I said in the key points, it will mostly get it right. Even with that change in direction, while the visual effect of it was delayed slightly, you had no lag as you came up to the change in direction (and that does need to go to the server, as it is the arbiter of truth; we have all seen ourselves rubber-band back when we lag and it disagrees). That is much more important, e.g. when you are doing a jump, the fact that there is a tiny bit of lag after the jump is much less important than lag in doing the jump - i.e. DLSS 3 did not stop you timing your jump correctly; there was no lag then.

Anyway, the proof will be in the real testing; we will see then. All I am saying is that it's far from proven that the AI frames will be a problem.

As noted above, the 1/2 frame of latency is always there. That's how long it takes the GPU to generate the frame. It doesn't matter if it is correct or incorrect. This is from nVidia themselves.
 

deathBOB

Senior member
Dec 2, 2007
566
228
116
Not buying it. There aren’t a lot of scenarios where 8ish ms of input latency is detectable to the user. But even normal people see and feel the impact of going from 60 to 120 fps.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Not buying it. There aren’t a lot of scenarios where 8ish ms of input latency is detectable to the user. But even normal people see and feel the impact of going from 60 to 120 fps.

What aren't you buying?
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
That latency should be pretty obvious on anything active - sort of like how I used to go for 90+ FPS in games even though I only had a 60Hz monitor, simply because it was more responsive.
 
  • Like
Reactions: Stuka87

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Not buying it. There aren’t a lot of scenarios where 8ish ms of input latency is detectable to the user. But even normal people see and feel the impact of going from 60 to 120 fps.

Sounds like you are buying NVidia Marketing copy.

Have you ever considered why people feel the impact of going from 60 to 120 FPS?

It's at least partly because latency improves. Inserting fake frames isn't real 120 FPS, and it does the opposite: instead of decreasing latency, it increases it.
 

coercitiv

Diamond Member
Jan 24, 2014
6,204
11,909
136
Not buying it. There aren’t a lot of scenarios where 8ish ms of input latency is detectable to the user. But even normal people see and feel the impact of going from 60 to 120 fps.
You don't have to buy it from us; buy it from Nvidia themselves:
https://www.nvidia.com/en-us/geforce/news/reflex-low-latency-platform/

Let's see what 8ms extra latency does, according to Nvidia research:
NVIDIA Research has found that even minor differences in system latency -- 12ms vs. 20ms, can have a significant difference in aiming performance. In fact, the average difference in aiming task completion (the time it takes to acquire and shoot a target) between a 12ms and 20ms PCs was measured to be 182ms - that is about 22 times the system latency difference. To put that into perspective, given the same target difficulty, in a 128 tick Valorant or CS:GO server, your shots will land on target an average of 23 ticks earlier on the 12ms PC setup. Yet most gamers play on systems with 50-100ms of system latency!
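
For reference, a quick sanity check of the arithmetic in that quote (only the figures Nvidia gives are used):

```python
# Sanity-check the figures from the quoted Nvidia Reflex article.
low_ms, high_ms = 12, 20                  # the two system-latency setups
aim_gap_ms = 182                          # measured gap in aiming task completion

latency_gap_ms = high_ms - low_ms         # 8 ms
print(f"latency gap   : {latency_gap_ms} ms")
print(f"amplification : {aim_gap_ms / latency_gap_ms:.1f}x")   # ~22x, as Nvidia says

tick_rate = 128                           # Valorant / CS:GO server tick rate
ticks_earlier = aim_gap_ms / (1000 / tick_rate)
print(f"ticks earlier : {ticks_earlier:.0f}")                  # ~23 ticks, as quoted
```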
 

deathBOB

Senior member
Dec 2, 2007
566
228
116
Sounds like you are buying NVidia Marketing copy.

Have you ever considered why people feel the impact of going from 60 to 120 FPS?

It's at least partly because latency improves. Inserting fake frames isn't real 120 FPS, and it does the opposite: instead of decreasing latency, it increases it.

Eyes are sensitive and the body is relatively slow. Human reaction times are typically more than 100ms. Add in the fact that most gaming where input times matter is still subject to network and server latency.

The improvement in visual smoothness clearly dominates the effect on input. You would have to show me some blind testing to convince me otherwise.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Eyes are sensitive and the body is relatively slow. Human reaction times are typically more than 100ms. Add in the fact that most gaming where input times matter is still subject to network and server latency.

The improvement in visual smoothness clearly dominates the effect on input. You would have to show me some blind testing to convince me otherwise.

Feel is about latency/lag. When I first switched from CRT to LCD way back in 2006, I immediately felt that my controls were sludgy, due to the extra lag.

At that point I hadn't even read about lag or anything like that. I just sensed immediately that something was wrong.

I couldn't have told you anything was wrong just watching the screen without using the controls.

When latency changes, you feel it in the controls, so when latency increases it feels slow and wrong, and when latency decreases it feels snappy and right.