
Question DLSS 2.0



Senior member
Apr 3, 2006
DLSS 2.0 uses convolution and inference to upscale images and do AA with a machine-learning-trained model (Nvidia calls it a "convolutional autoencoder"); that is exactly what they displayed in the DirectML video above.

They clearly say in the video that they are using an NVIDIA model called an "autoencoder" (at 19:43 in the video) for their super-resolution neural network. So either you haven't seen the video or you're talking about something else, because what they showcased was exactly what DLSS 2.0 does.
Convolution is a standard mathematical operation. That's basically like saying, "They both use math, so they are the same."
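To make that point concrete, here is a minimal sketch (names and values are illustrative): convolution is just a sliding weighted sum, and the very same operation underlies both a classic blur filter and a CNN layer; only the weights differ.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: a sliding weighted sum (kernel flipped)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[k - 1 - j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A 3-tap box blur -- the same operation a CNN layer performs,
# just with learned rather than fixed weights.
blurred = conv1d([0, 3, 0, 3, 0], [1/3, 1/3, 1/3])  # -> [1.0, 2.0, 1.0]
```

Two systems both built on this operation can still behave completely differently, because the behavior lives in the trained weights and the surrounding pipeline, not in the operation itself.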

I watched the video, and I also watched the video I posted that explains in greater detail how DLSS 2.0 works. Go watch that one.

It's clear that this old network NVidia provided to Microsoft just uses standard scaling techniques and combines them with some AI-based correction. It works only on the low-resolution output frames, plus a network trained to generate a higher-resolution result.

That was close to how DLSS 1.0 worked.

But DLSS 2.0 does NOT work like that. DLSS 2.0 requires direct hooks into the game engine to build a rich data set. It works more like advanced AI-driven checkerboard rendering, and it needs very specific setup. You adjust the mip bias to feed in higher-resolution textures, more typical of the output resolution than the input resolution. Then you set up a Halton pixel-jitter pattern to multi-sample pixels; the bigger the upscale factor, the more jitter phases.
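A minimal sketch of those two setup pieces, the Halton jitter sequence and the mip bias. The function names are made up for illustration; the log2-of-resolution-ratio bias formula and the base-(2, 3) Halton sequence are the commonly cited choices for temporal upscalers, and the exact phase count a given DLSS mode uses is not modeled here.

```python
import math

def halton(index: int, base: int) -> float:
    """Radical-inverse (van der Corput) value of `index` in the given base.
    Successive indices fill [0, 1) evenly without clumping."""
    result, frac, i = 0.0, 1.0, index
    while i > 0:
        frac /= base
        result += frac * (i % base)
        i //= base
    return result

def jitter_offsets(phases: int):
    """Sub-pixel (x, y) jitter offsets in [-0.5, 0.5), one per frame,
    from a (2, 3) Halton sequence."""
    return [(halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5)
            for i in range(phases)]

def mip_bias(render_width: int, output_width: int) -> float:
    """Negative LOD bias so texture detail matches the output,
    not the render, resolution."""
    return math.log2(render_width / output_width)
```

For example, rendering at 1920 wide for a 3840-wide output gives a bias of -1.0, i.e. the sampler picks one mip level sharper than the render resolution alone would justify.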

This is very advanced multi-sample spatio-temporal sampling, combined with feeding in higher-than-normal-quality textures. All of this data contains what you need to reconstruct a higher-resolution image, but writing an algorithm by hand to reconstruct the image from this complex input would bury a human in endless corner cases. So the AI is trained to do the reconstruction from this rich data.
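A toy 1-D illustration of why the jittered samples contain enough information for reconstruction. This is only a sketch of the sampling idea, not how DLSS actually aggregates samples; the real system also uses motion vectors, depth, and a trained network to handle moving scenes.

```python
def reconstruct_1d(signal_fn, low_res, scale, jitters):
    """Toy 'reconstruction': scatter jittered low-res samples of signal_fn
    into a grid `scale` times finer. With `scale` distinct jitter phases,
    every high-res cell receives a sample for a static signal -- the data
    is all there; the hard part (which DLSS hands to a trained network)
    is combining samples robustly once the scene moves between frames."""
    high_res = low_res * scale
    acc = [0.0] * high_res
    cnt = [0] * high_res
    for j in jitters:                      # one jitter offset per "frame"
        for i in range(low_res):
            x = (i + 0.5 + j) / low_res    # jittered sample position in [0, 1)
            cell = min(int(x * high_res), high_res - 1)
            acc[cell] += signal_fn(x)
            cnt[cell] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]

# 4 low-res samples per frame, 2x upscale, 2 jitter phases:
# all 8 high-res cells get filled.
upscaled = reconstruct_1d(lambda x: x, 4, 2, [-0.25, 0.25])
```

That is the key difference from plain upscaling: the jittered frames genuinely contain sub-pixel information, rather than the network having to hallucinate it from a single low-res frame.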

DLSS 1.0 and that conference video are simply doing scaling with AI corrections, which has demonstrably mixed results.

DLSS 2.0 is doing advanced AI reconstruction from a rich data set.

They are nothing alike.

DLSS 1.0 is AI-aided scaling. DLSS 2.0 is AI-aided advanced image reconstruction.

A little logic would also tell you it's a lot more likely that in 2018 they would share a trained network closer to DLSS 1.0 than to DLSS 2.0.