Question DLSS 2.0

Page 6 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
There is no thread on this, and it's worth one because it seems to finally be realising its potential. There are many articles discussing it elsewhere - it's used by Wolfenstein: Youngblood, Control and a few other games, and it works really well. Allegedly it's easy to add - no per-game training required. It gives a good performance increase and looks really sharp.

Nvidia article.
wccf article.
eurogamer article.

The above articles have some good comparison screen shots that really demonstrate what it can do.

Discuss...
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,686
136
When it works, it's scary how good the results are.
And when it doesn't, it feels like old smartphone camera software took over: sharpening and edge-contrast enhancement all over.

dlss2.gif

I really like the improvements in DLSS 2.0 and I think this type of technique may very well be the future, but as long as there's little will to openly discuss both pros and cons, I doubt threads like this will go anywhere. The flaws of DLSS 1.0 were an order of magnitude more obvious, yet it was just as fiercely defended.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
I said they are both using the same technologies (convolution, inferencing, etc.) to upscale and anti-alias. Just because one is an API doesn't change the fact that they both do the same thing using the same technologies.

They are not.
DirectML is just an API. It's not an upscaler/AA technique.

DLSS is a specific Image reconstruction/AA technique using a trained NN model.

The Application is NOT the API.

Of course you can take the trained network and run it on DirectML. The whole point of neural networks is that they are HW agnostic; they can be run on nearly any HW. Preferably Tensor cores designed for the job, but you can fall back to GPU math cores, and if those are lacking, fall back to running on the CPU. Nothing new about any of this.


You can see DirectML upscaling using convolution (same as DLSS 2.0), using an NVIDIA-trained model, almost two years before NVIDIA released DLSS 2.0.

That isn't remotely like DLSS 2.0. It's a fairly basic upscaler that just works on the final video output. It's likely early work that led into DLSS 1.0, but from the looks of things, it isn't actually DLSS 1.0 either.

The DLSS 2.0 implementation functions totally differently from DLSS 1.0, and both appear different from this early trained network demoed here.

But yeah, if someone provides the Trained Network, you can run it through DirectML. Because DirectML is an API for ML (Machine Learning - AKA Neural Networks).

All this demo shows is putting Microsoft's API between the model NVidia trained and the Titan V card it's running on.

Trained Network -> NVidia API -> NVidia Tensor HW execution.

becomes:

Trained Network -> Microsoft API -> NVidia API -> NVidia Tensor HW execution.
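The point that the API layer is interchangeable while the trained weights stay the same can be sketched in a few lines of Python. Everything here is hypothetical (a toy one-kernel "model" and made-up backend names, not real DirectML or CUDA calls); it only illustrates that the application logic lives in the weights, not in whichever API dispatches the math:

```python
# A tiny "trained network": one smoothing kernel's weights.
WEIGHTS = [0.25, 0.5, 0.25]

def conv1d(signal, kernel):
    """Valid-mode 1D convolution, the core op both 'APIs' must provide."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def vendor_api_run(model, data):
    """Stand-in for a vendor path (think: tensor-core dispatch)."""
    return conv1d(data, model)

def generic_api_run(model, data):
    """Stand-in for a generic ML API layered on top (think: DirectML)."""
    # The extra API layer just forwards to whatever backend is available.
    return vendor_api_run(model, data)

signal = [0.0, 0.0, 4.0, 0.0, 0.0]
# Same weights, same input, same output, regardless of the API in between.
assert vendor_api_run(WEIGHTS, signal) == generic_api_run(WEIGHTS, signal)
```

Swapping the API changes the plumbing, not the result; what you cannot swap in is the trained weights themselves, which is the proprietary part.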


If NVidia provided their new DLSS 2.0 trained network, you could no doubt run it on DirectML, though if you lacked the underlying HW to accelerate it, it's hard to say how well it would perform.

Of course, the final important bit is that the trained network is a proprietary work product; they won't be sharing it.

AMD will have to develop their own trained network to compete.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
I said they are both using the same technologies (convolution, inferencing, etc.) to upscale and anti-alias. Just because one is an API doesn't change the fact that they both do the same thing using the same technologies.

For those that haven't watched it before, I will post the video again below:

You can see DirectML upscaling using convolution (same as DLSS 2.0), using an NVIDIA-trained model, almost two years before NVIDIA released DLSS 2.0.

It's because you're using terms incorrectly that it was hard to understand what you're trying to say. APIs are software interfaces that do certain things, usually very specific low-level things, so there are many of them making up a library. See the CUDA documentation link previously. DirectML is a different library. The APIs are what software developers use to interact with the underlying hardware to do things like convolution and inference. That's why guidryp and Tup3x are telling you that you're mixing things up by comparing an API (which is low-level and does not provide higher-level functionality) with DLSS, which is an application that uses CUDA APIs. And if you see my explanation of AI training, you'll understand that just because there's DirectML, it does not automatically mean you can do DLSS. You can, but you have to do the work unless someone shares their already-trained model, just like the Siggraph demo, where NV provided the model that ran on the application using the DirectML APIs.

Hope that clears up your understanding. Technology terms need to be used properly, otherwise they create confusion.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
And when it doesn't, it feels like old smartphone camera software took over: sharpening and edge-contrast enhancement all over.

I agree on the sharpening part. They have put in too much default sharpening. I think this might be a response to AMD scaling with sharpening. I read there is supposed to be a user-adjustable control for this soon, but right now it's left to the developer to set the amount. Unfortunately, it seems more people are wowed by excess sharpening than are offended by its artifacts. As always, it sucks to be in the minority when things are aimed at a majority that doesn't share your viewpoint.

I really like the improvements in DLSS 2.0 and I think this type of technique may very well be the future, but as long as there's little will to openly discuss both pros and cons, I doubt threads like this will go anywhere. The flaws of DLSS 1.0 were an order of magnitude more obvious, yet it was just as fiercely defended.

That isn't what I have seen. When DLSS 1.0 was dissected by channels like Hardware Unboxed and was roundly criticized, I saw few defenders of it. But now that Digital Foundry and HWUB are praising it, people are defending it.

For me it's night and day: DLSS 1.0 was a failure. With 2.0 there is a bit too much sharpening for my taste, and I hope that gets toned down or gets user control, but even with that, I would use DLSS 2.0 even if it provided no performance gains, because IMO it's a vastly superior AA method. Still images don't do it justice; you have to watch the motion results.

On small details in motion, rendering and normal AA techniques can fall apart, creating a jitter of small elements on screen that stands out like a sore thumb. But DLSS 2.0 stabilizes these elements to a degree I haven't seen anywhere else.

It still has artifacts, especially around particle effects, but for me the visual tradeoff falls in favor of DLSS even before we get to the performance gains.

Adjust the sharpening and fix the particle effect artifacts, and I think it would be a winner for everyone.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
And when it doesn't, it feels like old smartphone camera software took over: sharpening and edge-contrast enhancement all over.

View attachment 30805

I really like the improvements in DLSS 2.0 and I think this type of technique may very well be the future, but as long as there's little will to openly discuss both pros and cons, I doubt threads like this will go anywhere. The flaws of DLSS 1.0 were an order of magnitude more obvious, yet it was just as fiercely defended.
Actually, NV is pretty upfront about DLSS 2.0. As I complained in another thread, the problem is that the YouTube reviewers are not technical and didn't do a good job explaining things. I didn't pay attention to DLSS because I don't have a Turing card; I only got interested because I'm getting an Ampere card. But with just a basic understanding of neural networks, after reading just the Nvidia page on DLSS 2.0, it was clear to me what they're doing. Their 16K training easily explains why the letters on the backpack in Death Stranding were clearer than native 4K: the NN knew what letters look like, so it can reconstruct them clearly. And if you understand NNs, you know that not all output will match the expected results. In real-world situations that's impossible, because no one algorithm or program can properly predict every input set. This is why the AI requires constant training to account for new things and improve, but it's never going to be perfect. It also seems DLSS 2.0 requires motion vectors, not just the static image, which seems to be the cause of some of the abnormalities.
DLSS is really cool technology, using AI this way is thinking out of the box. It's not your traditional upscaler and AA.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,632
136
Are you absolutely sure that Nvidia's NN knows what letters look like? Because I just showed you an example where their NN did not know what a wire fence looks like.

The lettering in Control is pretty bad with DLSS as well. It's a lot better in 2.0 than 1.9, but still not good. I imagine anything with high contrast but small details will be difficult for the NN for some time still.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
The lettering in Control is pretty bad with DLSS as well. It's a lot better in 2.0 than 1.9, but still not good. I imagine anything with high contrast but small details will be difficult for the NN for some time still.

I haven't really seen where it's worse than native. Every time I look at some random video of Control, like this one, about the only difference between native and DLSS 2.0 that I can see is higher sharpness on DLSS 2.0.

Otherwise detail looks essentially identical:
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,632
136
I haven't really seen where it's worse than native. Every time I look at some random video of Control, like this one, about the only difference between native and DLSS 2.0 that I can see is higher sharpness on DLSS 2.0.

Otherwise detail looks essentially identical:

Digital Foundry covered it in their video. Large lettering is fine but smaller lettering like on wall signs is noticeably worse.

 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
Digital Foundry covered it in their video. Large lettering is fine but smaller lettering like on wall signs is noticeably worse.

Bear in mind that's noticeable at 800% magnification of a still frame, on text so small it's mostly illegible to start with, even on native.

Numerous times when he is pointing out artifacts at 800% magnification, you can see a pop-up saying "Not visible at normal viewing distance!". Like here:

A little later he says the same out loud, and goes on to say that at normal viewing distance it's very hard to tell, and he was checking performance to see which mode he was in.

Also those artifact comparisons were in "performance" mode. They would be lessened in "quality" mode.

Video also points out that the DLSS 2.0 Sharpening effect is adjustable by game devs at this point. Lowering it would have reduced some of those artifacts.

Bottom line: yes, there are still small artifacts you can find pixel-peeping at 800%, but by no means does it look bad at the actual output resolution of the monitor.

Even performance mode is quite difficult to distinguish from native.

Devs should better tune sharpness to be closer to the native presentation, or even better give users access to that setting.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,632
136
Bear in mind that's noticeable at 800% magnification of a still frame, on text so small it's mostly illegible to start with, even on native.

Numerous times when he is pointing out artifacts at 800% magnification, you can see a pop-up saying "Not visible at normal viewing distance!". Like here:

A little later he says the same out loud, and goes on to say that at normal viewing distance it's very hard to tell, and he was checking performance to see which mode he was in.

Also those artifact comparisons were in "performance" mode. They would be lessened in "quality" mode.

Video also points out that the DLSS 2.0 Sharpening effect is adjustable by game devs at this point. Lowering it would have reduced some of those artifacts.

Bottom line: yes, there are still small artifacts you can find pixel-peeping at 800%, but by no means does it look bad at the actual output resolution of the monitor.

Even performance mode is quite difficult to distinguish from native.

Devs should better tune sharpness to be closer to the native presentation, or even better give users access to that setting.

You don't need 800% magnification to notice the sign letter issue. You can plainly see it in the video at 10:40 at normal viewing distance, and the camera isn't even close to the sign. Pretty much all the issues can be seen at normal viewing distances if you stop and really look at the screen, but you aren't likely to notice them when actually playing the game. I agree DLSS 2.0 has come a long way and many gamers with 4K screens may choose to use it; I'm just pointing out it's not quite perfect yet, but it is definitely the best upscaling technology on the market today.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
You don't need 800% magnification to notice the sign letter issue. You can plainly see it in the video at 10:40 at normal viewing distance, and the camera isn't even close to the sign. Pretty much all the issues can be seen at normal viewing distances if you stop and really look at the screen, but you aren't likely to notice them when actually playing the game. I agree DLSS 2.0 has come a long way and many gamers with 4K screens may choose to use it; I'm just pointing out it's not quite perfect yet, but it is definitely the best upscaling technology on the market today.

There is a visible difference, but it isn't that noticeable. IMO it mainly looks like the aggressive sharpening that I've acknowledged as an issue. The native sign is softer and paler; the DLSS version is darker and more pronounced, with some edge artifacting. If they dialed back the sharpening, the edge artifacts would decline.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,632
136
Looking through the second video posted, there's definitely some starker differences. For instance:

1601660765874.png

The DLSS2.0 image is sharper for sure, but it also makes the black colors in the sweatshirt much more pronounced. The face is also different due to coloring and shading; she almost looks like a different person, with more masculine facial features.

At the 1:00 mark the whole scene appears brighter with DLSS2.0 and halo effects in many places, both probably due to sharpening but then at 2:10 mark you see texture details in her hair tie that aren't present in the native image but I don't know how much of that is just the sharpening feature as well.

Anyway, not gonna go through second by second but overall it's impressive tech. Hopefully Nvidia keeps improving on it to clean up the issues here and there.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
Are you absolutely sure that Nvidia's NN knows what letters look like? Because I just showed you an example where their NN did not know what a wire fence looks like.
That's how AI works: they don't "know" things the way we humans know things, but they do know that, given a pattern of low-res pixels, it is most likely the letter "A". As for why a wire fence might not look like what you expect, well, that's probably due to whatever input set NV used to train the AI. Because they're using generic graphics (which we have no idea about, since it's part of the secret sauce), it might not match exactly. According to NV, with DLSS 2.0 they're using just one model. They're the experts and all, but it would seem to me that if they used multiple models, say for different types of games, their NN model would be even more accurate. Like one for shooters and a different one for driving games, etc. It's still generic, unlike DLSS 1.0, but more tailored for different types of games or graphical styles.
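The "pattern of low-res pixels → most likely the letter A" intuition can be shown with a toy Python sketch. The prototypes and pixel patch below are entirely made up, and a real NN learns its patterns from training data rather than storing templates, but the idea of picking the closest known pattern is the same:

```python
# Hypothetical 4x3 binary "glyph" prototypes (1 = lit pixel).
PROTOTYPES = {
    "A": (0, 1, 0,
          1, 0, 1,
          1, 1, 1,
          1, 0, 1),
    "L": (1, 0, 0,
          1, 0, 0,
          1, 0, 0,
          1, 1, 1),
}

def classify(patch):
    """Return the glyph whose prototype has the fewest mismatched pixels."""
    return min(PROTOTYPES,
               key=lambda g: sum(a != b for a, b in zip(patch, PROTOTYPES[g])))

noisy_a = (0, 1, 0,
           1, 0, 1,
           1, 0, 1,   # one pixel flipped vs. the "A" prototype
           1, 0, 1)
print(classify(noisy_a))  # still matches "A" despite the corrupted pixel
```

That's also why it can get things wrong: a pattern the model never saw during training (like fine fence wires) gets mapped to whatever it resembles most.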

The DLSS2.0 is sharper for sure, but it also makes the black colors in the sweat shirt much more pronounced. The face is also different due to coloring and shading, she almost looks like a different person with more masculine facial features.

At the 1:00 mark the whole scene appears brighter with DLSS2.0 and halo effects in many places, both probably due to sharpening but then at 2:10 mark you see texture details in her hair tie that aren't present in the native image but I don't know how much of that is just the sharpening feature as well.
That's the thing: DLSS is not your traditional upscale where you basically just duplicate the pixels and then use AA and sharpening algorithms to make it look nice. That's what makes the image soft. DLSS is different because it's using AI to recreate the image. That's why it's sharp: it's not just blurring things to get rid of the jaggies from upping the resolution. And the differences, unfortunately, are because the AI is recreating the image based on what it thinks the image should look like. The better trained the AI, the more accurate it will be, but it's never going to be perfect.

Understand the technology and the trade offs and decide if you want to use it. Personally I will take the FPS gain and live with some of the weird images generated by the computer. But if it's so jarring where it impacts my enjoyment of the game, then I wouldn't use it.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,632
136
That's how AI works: they don't "know" things the way we humans know things, but they do know that, given a pattern of low-res pixels, it is most likely the letter "A". As for why a wire fence might not look like what you expect, well, that's probably due to whatever input set NV used to train the AI. Because they're using generic graphics (which we have no idea about, since it's part of the secret sauce), it might not match exactly. According to NV, with DLSS 2.0 they're using just one model. They're the experts and all, but it would seem to me that if they used multiple models, say for different types of games, their NN model would be even more accurate. Like one for shooters and a different one for driving games, etc. It's still generic, unlike DLSS 1.0, but more tailored for different types of games or graphical styles.


That's the thing: DLSS is not your traditional upscale where you basically just duplicate the pixels and then use AA and sharpening algorithms to make it look nice. That's what makes the image soft. DLSS is different because it's using AI to recreate the image. That's why it's sharp: it's not just blurring things to get rid of the jaggies from upping the resolution. And the differences, unfortunately, are because the AI is recreating the image based on what it thinks the image should look like. The better trained the AI, the more accurate it will be, but it's never going to be perfect.

Understand the technology and the trade offs and decide if you want to use it. Personally I will take the FPS gain and live with some of the weird images generated by the computer. But if it's so jarring where it impacts my enjoyment of the game, then I wouldn't use it.

They are definitely using a sharpening filter with DLSS2.0; it's very obvious, and they've even commented that the sharpening amount could be user-controlled in the future. It's at least a good part of why the DLSS2.0 images look sharper than native but also show artifacts like the halo effect.
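For what it's worth, the halo effect falls out of any unsharp-mask-style sharpening. Here's a toy 1D Python sketch (the box blur and the amount are arbitrary choices for illustration, not Nvidia's actual filter):

```python
# Unsharp mask: sharpened = original + amount * (original - blurred).
# Aggressive amounts overshoot at edges, which reads as halos on screen.

def blur(row):
    """Simple 3-tap box blur with clamped edges."""
    out = []
    for i in range(len(row)):
        lo, hi = max(i - 1, 0), min(i + 1, len(row) - 1)
        out.append((row[lo] + row[i] + row[hi]) / 3.0)
    return out

def sharpen(row, amount):
    """Boost each pixel by its difference from the local average."""
    return [p + amount * (p - q) for p, q in zip(row, blur(row))]

edge = [0, 0, 0, 1, 1, 1]   # a clean step edge, values in [0, 1]
print(sharpen(edge, 1.5))   # pixels next to the edge overshoot below 0 and above 1
```

The overshoot on either side of the step is exactly the dark/bright fringe you see around high-contrast signs and lettering.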
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
Welcome to the wonderful world of forced TAA.

These days some game engines use TAA to clean up noise as well as do AA, so they force it on. In that GTC DLSS video I posted on the previous page, there is a section on how to integrate DLSS 2.0 into game engines.

One requirement was that if you use TAA to denoise your game, you have to come up with another way to denoise, because you have to feed a clean image to DLSS 2.0; it can't work with noisy content, and it doesn't replace that aspect of TAA.

So DLSS will be more work for some devs than others depending on what they do with TAA.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
They are definitely using a sharpening filter with DLSS2.0; it's very obvious, and they've even commented that the sharpening amount could be user-controlled in the future. It's at least a good part of why the DLSS2.0 images look sharper than native but also show artifacts like the halo effect.
I'm not going to say you're wrong, because I don't know their code or the NN algorithm. But the "sharpening" part could be part of the NN or applied after the image is generated by the NN. I don't know if it makes any difference whether it's inside the NN or outside. You can control how the NN model runs; by adjusting the weights of the NN nodes, you get a different result.

Looking at the comments, I see that some of you are still not buying the AI part and still thinking it's some sort of AA trick and just NV marketing. If that were the case, it wouldn't generate more detail than the original image. I'm not sure there's any more I can say to convince you otherwise. Neural network technology is real and is consistent with what NV has said about DLSS.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,632
136
I'm not going to say you're wrong, because I don't know their code or the NN algorithm. But the "sharpening" part could be part of the NN or applied after the image is generated by the NN. I don't know if it makes any difference whether it's inside the NN or outside. You can control how the NN model runs; by adjusting the weights of the NN nodes, you get a different result.

Looking at the comments, I see that some of you are still not buying the AI part and still thinking it's some sort of AA trick and just NV marketing. If that were the case, it wouldn't generate more detail than the original image. I'm not sure there's any more I can say to convince you otherwise. Neural network technology is real and is consistent with what NV has said about DLSS.

I understand how NNs work; I use them at work for certain tasks, and I understand how DLSS2.0 works. I also understand that more detail doesn't necessarily mean good detail. Lastly, I also know what it looks like when a sharpening filter is applied, and it's obvious one is being used in DLSS2.0, at least in Control.
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,686
136
Looking at the comments, I see that some of you are still not buying the AI part and still thinking it's some sort of AA trick and just NV marketing.
That's your interpretation, not something some of us are saying.

As far as sharpening goes, it is part of DLSS; we know this from Nvidia developers quoted in one of the videos above. Showing how sharpening behaves on native images was not intended to diminish DLSS, but rather to help people understand where some of the perceived advantages and disadvantages of DLSS ON come from. Sharpening is useful in instances where DLSS manages to introduce more detail than TAA, but unfortunately it seems to be set a bit too aggressively by default. According to the engineers, we're likely going to get adjustable control over this feature.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
No doubt it is being sharpened intentionally, and there is a control game devs can adjust; it's really irrelevant whether this comes from the neural net or a conventionally coded algorithm. Though it would be better if the user had some control, because tastes differ on the amount of sharpness desired; even the same person can want different sharpness depending on the use case. The main issue with sharpening is: give us an override.

But we shouldn't get too lost in the weeds on sharpening. There is also no doubt that the reconstruction is done by AI, and it is doing a very good job. It's a significant step up over DLSS 1.9, which was reconstruction with a traditionally coded algorithm.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
That isn't remotely like DLSS 2.0. It's a fairly basic upscaler that just works on the final video output. It's likely early work that led into DLSS 1.0, but from the looks of things, it isn't actually DLSS 1.0 either.

The DLSS 2.0 implementation functions totally differently from DLSS 1.0, and both appear different from this early trained network demoed here.

DLSS 2.0 uses convolution and inferencing to upscale images and do AA by using a machine-learning trained model (Nvidia calls it a Convolutional Autoencoder); that is exactly what they displayed in the DirectML video above.

They clearly say in the video that they are using an NVIDIA model called an "Autoencoder" (at 19:43 in the video) for their Super Resolution neural network. So either you haven't seen the video or you're talking about something else, because what they showcased was exactly what DLSS 2.0 does.