Discussion: Ada/'Lovelace'? Next gen Nvidia gaming architecture speculation


deathBOB

Senior member
Dec 2, 2007
You don't have to buy it from us; buy it from Nvidia themselves:
https://www.nvidia.com/en-us/geforce/news/reflex-low-latency-platform/

Let's see what 8ms extra latency does, according to Nvidia research:

[Attachment: latency chart from Nvidia's Reflex research article]
Doesn't tell us anything about whether users will detect the difference in input latency over the visual difference in frame rate. There are going to be situations where the drawbacks of DLSS 3 outweigh the benefits (and why would you need it for things like Valorant or CS when you can easily hit high FPS), but most people will benefit from visual smoothness most of the time.
 

coercitiv

Diamond Member
Jan 24, 2014
deathBOB said:
Doesn't tell us anything about whether users will detect the difference in input latency over the visual difference in frame rate.
It's all in the Nvidia research article: they lay out clearly that it's not about the player "detecting" the difference; Nvidia objectively measured hit accuracy and kill time.

When comparing a system with 12ms latency to one with 20ms latency, kill time increased from 1.35 seconds to 1.53 seconds. That's a difference of ~180ms. Think about that: an 8ms latency difference leading to a 180ms delta in target acquisition. They even explain why: lower latency improves the series of iterative movements one needs to perform in order to aim correctly, which means the extra latency is paid again and again, amplified by each corrective step we need to perform until the crosshair is on target.
Aiming involves a series of sub-movements - subconscious corrections based on the current position of the crosshair relative to the target’s location. At higher latencies, this feedback loop time is increased resulting in less precision. Additionally, at higher average latencies, the latency varies more, meaning that it’s harder for your body to predict and adapt to. The end result is pretty clear - high latency means less precision.

And that's just aiming. There's more in there. Read the article; it's powerful medicine for gaming preconceptions.
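To see the compounding mechanism, here is a toy model (the correction count and reaction time are made-up illustration values, not Nvidia's measurements; only the 12ms vs. 20ms system latencies come from the quote above):

Code:
# Toy model: the system latency is paid on every corrective sub-movement,
# so a small per-loop penalty compounds over the whole aiming sequence.
# Correction count and reaction time are made-up illustration values.

def time_to_target(system_latency_ms, corrections=8, reaction_ms=150):
    """Each correction waits on one feedback loop: see the crosshair, then adjust."""
    return corrections * (reaction_ms + system_latency_ms)

low = time_to_target(12)    # 8 * (150 + 12) = 1296 ms
high = time_to_target(20)   # 8 * (150 + 20) = 1360 ms
print(f"delta: {high - low} ms from an 8 ms per-loop difference")
# With more corrections, or extra variance at higher latency, the gap widens
# further, which is the direction of Nvidia's measured kill-time results.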

deathBOB said:
but most people will benefit from visual smoothness most of the time.
Only a few posts ago you were asking for blind tests, measurements. Reflect on that.
 

deathBOB

Senior member
Dec 2, 2007
coercitiv said:
It's all in the Nvidia research article: they lay out clearly that it's not about the player "detecting" the difference; Nvidia objectively measured hit accuracy and kill time.

Exactly, Nvidia is measuring something different from user perception of improved frame rate versus latency.


coercitiv said:
Only a few posts ago you were asking for blind tests, measurements. Reflect on that.

A blind test of user experience, as in: 1) can the user detect when DLSS 3 is on, and 2) if so, does the user prefer DLSS 3 with the latency hit over no DLSS 3? My assumption is that smoothness wins out over latency and users will prefer DLSS 3 on even with the latency hit.
 

Stuka87

Diamond Member
Dec 10, 2010
deathBOB said:
Exactly, Nvidia is measuring something different from user perception of improved frame rate versus latency.

A blind test of user experience, as in: 1) can the user detect when DLSS 3 is on, and 2) if so, does the user prefer DLSS 3 with the latency hit over no DLSS 3? My assumption is that smoothness wins out over latency and users will prefer DLSS 3 on even with the latency hit.

If you had two systems, both with 120fps output, one real 120fps and one DLSS 3, the difference to the user would be very obvious in any game where timing is important.

nVidia was measuring the ACTUAL impact of input latency.

I am not sure why you are trying to talk around the importance of input latency. In what world is a soap opera effect more important than having the game do what you want it to?
 

maddie

Diamond Member
Jul 18, 2010
deathBOB said:
Exactly, Nvidia is measuring something different from user perception of improved frame rate versus latency.

A blind test of user experience, as in: 1) can the user detect when DLSS 3 is on, and 2) if so, does the user prefer DLSS 3 with the latency hit over no DLSS 3? My assumption is that smoothness wins out over latency and users will prefer DLSS 3 on even with the latency hit.
That's the point: it will not be smooth when you make sudden changes to either your movement or your aim point. It projects a false frame further away from your intended move, and when it corrects itself with a new true frame, the jump will be bigger, since it has to adjust from the false position to the new one.

Think about it: if you're running and suddenly shift to the right, the false frame will have you run a bit farther along before you seem to move right. Any sudden, unpredictable change will show this, as nothing can predict your intentions in the next true frame. The false frame anticipates no vector change and simply extrapolates.
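A rough 1D sketch of that jump, assuming a generated frame that simply carries the previous velocity forward (illustrative positions only, not how DLSS 3 actually builds frames):

Code:
# Sketch of the "false frame" error: a generated frame that assumes constant
# velocity overshoots on a sudden direction change, and the next real frame
# then snaps back by a larger-than-normal delta. Purely illustrative values.

positions = [0.0, 1.0, 2.0, 3.0, 3.0, 3.0]  # running right, then an abrupt stop

for i in range(1, len(positions) - 1):
    prev, cur, nxt = positions[i - 1], positions[i], positions[i + 1]
    predicted = cur + (cur - prev)   # generated frame assumes no vector change
    snap = abs(nxt - predicted)      # correction needed when the real frame arrives
    print(f"frame {i}: predicted {predicted:.1f}, real {nxt:.1f}, snap-back {snap:.1f}")
# Around the sudden stop the snap-back jumps from 0.0 to 1.0 units: the generated
# frame carried the motion too far and the following real frame has to jump back.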
 

CakeMonster

Golden Member
Nov 22, 2012
Does anyone know enough about chip design to tell if it's possible or realistic to add PCIe 5.0 support for the fully enabled *102 chip (likely 4090 Ti) in 2023?
 

iR4boon

Junior Member
Sep 2, 2022
DLSS 3 with frame generation will only be viable/practical on 4K screens.

Lower than that, DLSS Performance mode is just bad, and if you switch to a higher DLSS quality mode, the latency penalty gets even worse.
 

pj-

Senior member
May 5, 2015
maddie said:
That's the point: it will not be smooth when you make sudden changes to either your movement or your aim point. It projects a false frame further away from your intended move, and when it corrects itself with a new true frame, the jump will be bigger, since it has to adjust from the false position to the new one.

Think about it: if you're running and suddenly shift to the right, the false frame will have you run a bit farther along before you seem to move right. Any sudden, unpredictable change will show this, as nothing can predict your intentions in the next true frame. The false frame anticipates no vector change and simply extrapolates.

Could it work similarly to how reprojection in VR works? Or would DLSS 3 not be able to poll input at a higher rate than the game itself?
 

Heartbreaker

Diamond Member
Apr 3, 2006
pj- said:
Could it work similarly to how reprojection in VR works? Or would DLSS 3 not be able to poll input at a higher rate than the game itself?

In PC games, polling the controls means nothing. You don't even know what the inputs do; they're different in every game.

In VR, moving your head means moving the viewport by an exact amount, so you can just shift the frame you already have a bit in that direction to cover for a late frame.

VR reprojection isn't without issues either.
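A minimal sketch of why the VR case is special, assuming example FOV and resolution values: a known head-yaw delta maps to a definite pixel shift, so a stale frame can simply be translated.

Code:
# Sketch of VR-style rotational reprojection: because a head rotation maps to a
# known viewport shift, a stale frame can be translated by the corresponding
# number of pixels instead of re-rendering. FOV and width are assumed examples.
import math

def reprojection_shift_px(yaw_delta_deg, h_fov_deg=100.0, width_px=2160):
    """Approximate horizontal pixel shift at screen centre for a small yaw change."""
    return width_px * math.tan(math.radians(yaw_delta_deg)) / (
        2 * math.tan(math.radians(h_fov_deg / 2))
    )

print(f"{reprojection_shift_px(1.0):.1f} px shift for 1 degree of head yaw")
# In a flat game this mapping doesn't exist: a mouse delta or key press means
# something different in every title, so the driver can't "reproject" raw input.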
 

maddie

Diamond Member
Jul 18, 2010
pj- said:
Could it work similarly to how reprojection in VR works? Or would DLSS 3 not be able to poll input at a higher rate than the game itself?
Won't you then have to render that new frame? If you do, you defeat the purpose of a computationally cheap frame.
 

Saylick

Diamond Member
Sep 10, 2012
I haven't had a chance to watch this in its entirety, but I figured I'd drop it in here now for everyone to digest. I'll add my 2c in a future edit below.

Edits:
- DF explains that the DLSS3 frame is inserted between two rendered frames.
- DF surmises that the push for higher and higher fps, even if some frames are AI generated, is to align with the push for high refresh rate monitors.
- There is a latency penalty for using DLSS3 FG (see below).
- Nvidia says there will be a "win some, lose some" scenario with DLSS3 FG, i.e. "there is no free lunch"
- DF acknowledges that while there are errors in the AI-generated frames, they are very difficult, if not impossible, to notice at a high enough frame rate.

[Attached images: Digital Foundry DLSS 3 analysis charts, including the latency comparisons referenced below]
 

Leeea

Diamond Member
Apr 3, 2020
maddie said:
Won't you then have to render that new frame? If you do, you defeat the purpose of a computationally cheap frame.
It works by distorting the frame.

This reduces image quality, but the distorted frame will hopefully be less disorienting to the VR user.

It's a band-aid, and not a good one.
 

Hitman928

Diamond Member
Apr 15, 2012
Saylick said:
I haven't had a chance to watch this in its entirety, but I figured I'd drop it in here now for everyone to digest. I'll add my 2c in a future edit below.

Edits:
- DF explains that the DLSS3 frame is inserted between two rendered frames.
- DF surmises that the push for higher and higher fps, even if some frames are AI generated, is to align with the push for high refresh rate monitors.
- There is a latency penalty for using DLSS3 FG (see below).
- Nvidia says there will be a "win some, lose some" scenario with DLSS3 FG, i.e. "there is no free lunch"
- DF acknowledges that while there are errors in the AI-generated frames, they are very difficult, if not impossible, to notice at a high enough frame rate.

[Attached images: Digital Foundry DLSS 3 analysis charts, including the latency comparisons]

Looking at the latency comparisons... no thanks! The Cyberpunk and Spider-Man latency numbers are actually quite terrible.
 

deathBOB

Senior member
Dec 2, 2007
Stuka87 said:
If you had two systems, both with 120fps output, one real 120fps and one DLSS 3, the difference to the user would be very obvious in any game where timing is important.

nVidia was measuring the ACTUAL impact of input latency.

I am not sure why you are trying to talk around the importance of input latency. In what world is a soap opera effect more important than having the game do what you want it to?

Not a relevant comparison. If I can hit 120 fps why do I need DLSS3 in the first place? I’m talking about 60 without DLSS3 versus 120 with DLSS3. People are going to choose the latter because visual smoothness is more noticeable than input latency.

“Soap opera effect” isn’t relevant either. We’ve been trained to expect a certain frame rate for films and television, and the content is made for this frame rate. No such problem with games.
 

Heartbreaker

Diamond Member
Apr 3, 2006
Saylick said:
I haven't had a chance to watch this in its entirety, but I figured I'd drop it in here now for everyone to digest. I'll add my 2c in a future edit below.

Edits:
- DF explains that the DLSS3 frame is inserted between two rendered frames.

When Nvidia revealed the half-frame lag, I suspected it was waiting for the next frame before doing the in-between frame, despite all the other pre-release info implying it was a forward projection.

This makes it even MORE like TV motion smoothing.
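Back-of-envelope numbers for that lag, assuming the generated frame depicts the midpoint between two real frames and a 60 fps base render rate (illustrative; real pipelines add generation and pacing overhead on top):

Code:
# Back-of-envelope look at the "half frame lag": the in-between frame can't be
# built until the next real frame exists, and with even pacing the displayed
# output ends up shifted by roughly half a real frame interval vs. no FG.
# The 60 fps base rate is an assumed example value.

base_fps = 60
frame_time_ms = 1000 / base_fps          # ~16.7 ms between real frames

wait_for_next_ms = frame_time_ms         # frame N+1 must be rendered first
approx_added_lag_ms = frame_time_ms / 2  # paced output trails by about half a frame

print(f"real frame interval: {frame_time_ms:.1f} ms")
print(f"wait for the next real frame: up to ~{wait_for_next_ms:.1f} ms")
print(f"net display shift vs. no frame generation: ~{approx_added_lag_ms:.1f} ms")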
 

Saylick

Diamond Member
Sep 10, 2012
Heartbreaker said:
When Nvidia revealed the half-frame lag, I suspected it was waiting for the next frame before doing the in-between frame, despite all the other pre-release info implying it was a forward projection.

This makes it even MORE like TV motion smoothing.
You can always count on Nvidia reinventing things that have existed for a long time, marketing them as something new, a la Apple, and then having the uninformed masses soak up the marketing.
 

Saylick

Diamond Member
Sep 10, 2012
I do think it is fair to say that the artifacting is going to be hard to notice when the frame is displayed for ~8ms (120 fps), but c'mon... It's hard to say that the image quality doesn't take a hit, because it does.
[Attached image: DLSS 3 frame generation artifacts]
 

exquisitechar

Senior member
Apr 18, 2017
Saylick said:
I do think it is fair to say that the artifacting is going to be hard to notice when the frame is displayed for ~8ms (120 fps), but c'mon... It's hard to say that the image quality doesn't take a hit, because it does.
[Attached image: DLSS 3 frame generation artifacts]
Yeah, I think my initial assessment of DLSS3 was correct. I see absolutely no reason to use it over DLSS2.x or native. Also, with the launch of Ada/DLSS3, Digital Foundry has confirmed beyond all doubt that they're, uhm, rather partial to Nvidia. Can't take them seriously at all.
 

Tup3x

Senior member
Dec 31, 2016
Saylick said:
I do think it is fair to say that the artifacting is going to be hard to notice when the frame is displayed for ~8ms (120 fps), but c'mon... It's hard to say that the image quality doesn't take a hit, because it does.
[Attached image: DLSS 3 frame generation artifacts]
That's obviously the worst-case scenario. One could easily find a frame that would be pretty much perfect; it will vary from scene to scene. It's also debatable whether one can actually notice that in motion. Errors are more severe in extremely fast motion, and in those cases you can't really notice them. The improved smoothness also probably makes the game look more enjoyable. In any case, this is probably something that will vary greatly between different people.
Hitman928 said:
Looking at the latency comparisons... no thanks! The Cyberpunk and Spider-Man latency numbers are actually quite terrible.
It's still much less than native without Reflex, so that's only valid if you compare against the same Nvidia card with DLSS enabled. Even in the worst case it is basically identical to native with Reflex on. How that will actually feel is another thing entirely, though.

ADHD multiplayer shooter guys probably shouldn't use this, though. In MS Flight Simulator this should work really, really well.
 

Heartbreaker

Diamond Member
Apr 3, 2006
Tup3x said:
It's still much less than native without Reflex, so that's only valid if you compare against the same Nvidia card with DLSS enabled.

IMO the only fair comparison is DLSS 3 with Frame generation enabled vs disabled on the same card.

Since the whole point of the comparison is to see how much latency frame generation adds.
 