Question DLSS 2.0


Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
There is no thread on this, and it's worth one because it seems to finally be realising its potential. There are many articles discussing it elsewhere - it's used by Wolfenstein: Youngblood, Control and a few other games, and it works really well. Allegedly it's easy to add - no per-game training required. It gives a good performance increase and looks really sharp.

Nvidia article.
WCCF article.
Eurogamer article.

The above articles have some good comparison screenshots that really demonstrate what it can do.

Discuss...
 
  • Like
Reactions: DXDiag

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
DirectML, DLSS, or any machine-learning super-sampling tech is extremely useful and produces much better results than native + TAA when used in the 2x (Quality) mode.

With the Quality mode we get higher performance, much better image quality and much better anti-aliasing vs native + TAA.
I'm sure the vast majority of games coming next year and beyond will be using a machine-learning super-sampling/super-anti-aliasing solution like MS DirectML or NVIDIA DLSS.
Games with ray-tracing effects in particular will see a tremendous benefit from this technology, especially on next-gen consoles.
 
  • Like
Reactions: Tlh97 and DXDiag

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
DirectML, DLSS, or any machine-learning super-sampling tech is extremely useful and produces much better results than native + TAA when used in the 2x (Quality) mode.

With the Quality mode we get higher performance, much better image quality and much better anti-aliasing vs native + TAA.
I'm sure the vast majority of games coming next year and beyond will be using a machine-learning super-sampling/super-anti-aliasing solution like MS DirectML or NVIDIA DLSS.
Games with ray-tracing effects in particular will see a tremendous benefit from this technology, especially on next-gen consoles.
Do the next-gen consoles have the AI hardware? If not, how are they going to run the machine-learning algorithms? If they have to use shaders, that will take far too much time to compute, since those aren't specialised for the job - and it's those same shaders doing the rest of the rendering pipeline and the ray tracing. Until AMD adds dedicated hardware support to their GPUs, I very much suspect this stays Nvidia-only. Perhaps the consoles will get a mid-life upgrade in a few years and pick it up then.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
Do the next-gen consoles have the AI hardware? If not, how are they going to run the machine-learning algorithms? If they have to use shaders, that will take far too much time to compute, since those aren't specialised for the job - and it's those same shaders doing the rest of the rendering pipeline and the ray tracing. Until AMD adds dedicated hardware support to their GPUs, I very much suspect this stays Nvidia-only. Perhaps the consoles will get a mid-life upgrade in a few years and pick it up then.

If you are referring to Tensor Cores, then no, consoles will not have those. Next-gen consoles will not have fixed-function hardware for ML; I believe they will have it for RT.
But even without fixed-function hardware they will be able to use DirectML (Xbox) for super sampling and super anti-aliasing. They have async compute, rapid packed math, and INT8/INT4 support to do ML.
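
To make the low-precision point concrete, here's a rough CPU-side sketch (my own illustration, nothing console-specific) of the kind of 8-bit integer math that packed-math/INT8 paths accelerate for ML inference:

```python
import numpy as np

def int8_dot(a_f32, b_f32):
    """Quantize two FP32 vectors to INT8, do the dot product with integer math,
    then rescale - the basic trick behind 8-bit inference."""
    a_scale = max(float(np.abs(a_f32).max()), 1e-12) / 127.0
    b_scale = max(float(np.abs(b_f32).max()), 1e-12) / 127.0
    a_q = np.clip(np.round(a_f32 / a_scale), -127, 127).astype(np.int8)
    b_q = np.clip(np.round(b_f32 / b_scale), -127, 127).astype(np.int8)
    acc = np.dot(a_q.astype(np.int32), b_q.astype(np.int32))  # wide integer accumulator
    return acc * a_scale * b_scale  # rescale back to a float result

x = np.random.randn(256).astype(np.float32)
w = np.random.randn(256).astype(np.float32)
print(float(np.dot(x, w)), float(int8_dot(x, w)))  # close, not identical
```

Packed-math hardware does the same thing, just several of these narrow multiplies per lane per clock instead of one.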

Xbox-one-xxx.png
 
  • Like
Reactions: Tlh97

FiendishMind

Member
Aug 9, 2013
60
14
81
The ML model was created by NVIDIA. They used it with Microsoft's DirectML to upscale the image in the demo.
Right, but as a concept and in terms of the development work, it seems MLSS/DLSS was very much "invented" by Nvidia. Also, looking over the DirectML papers and presentations I can find, Nvidia's collaborations and contributions during its development dwarf those of any other hardware vendor. DXR is pretty much the same situation.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Do the next-gen consoles have the AI hardware? If not, how are they going to run the machine-learning algorithms? If they have to use shaders, that will take far too much time to compute, since those aren't specialised for the job - and it's those same shaders doing the rest of the rendering pipeline and the ray tracing. Until AMD adds dedicated hardware support to their GPUs, I very much suspect this stays Nvidia-only. Perhaps the consoles will get a mid-life upgrade in a few years and pick it up then.

Tensor cores aren't 'AI hardware'. They perform a specialized form of matrix math. General-purpose cores can also do this math, but with a lot more overhead, so the tensor cores end up doing it faster. They have pretty limited usage in games, but DLSS calculations are one of them.
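
To illustrate what that specialized matrix math actually is, here's a tiny NumPy sketch (a CPU emulation for illustration only, not how you program tensor cores): FP16 inputs multiplied together with the products accumulated at FP32 precision, which is roughly what a tensor core does on a small tile in one hardware operation.

```python
import numpy as np

def mma_fp16_fp32(a, b, c):
    """D = A @ B + C with FP16 inputs and FP32 accumulation, emulated on the CPU.
    Real tensor cores do this on small fixed-size tiles (e.g. 4x4x4) per instruction."""
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # inputs are reduced to FP16, but the products are summed in FP32
    return a16.astype(np.float32) @ b16.astype(np.float32) + c.astype(np.float32)

A, B = np.random.randn(4, 4), np.random.randn(4, 4)
print(mma_fp16_fp32(A, B, np.zeros((4, 4))))
```

A shader core can run the same math, but it issues the multiplies and adds as many separate instructions, which is the overhead being referred to.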

Navi 2 (which both consoles have) has RT support, though AMD has not yet detailed their exact implementation. And while Navi 1 can run RT code (albeit not well), I think we should fully expect Navi 2 to have hardware support of some manner - although, according to some leaks late last year, AMD's approach is different on the hardware side.
 
  • Like
Reactions: Tlh97

DXDiag

Member
Nov 12, 2017
165
121
116
AMD's solution happens at the driver level so it's not about the monitor supporting it. The only caveat is that there needs to be a wide enough range in VRR for it to be activated. Same as Gsync.
That's the relevant caveat here: it needed a monitor with a max/min refresh-rate ratio of 2.5. G-Sync monitors needed none of that; G-Sync monitors function even with a max refresh rate of 60Hz.

Local dimming, what does that have to do with VRR?
VRR with HDR, not VRR alone. VRR with HDR requires certain quality standards, otherwise the HDR experience will not be complete.

BTW, AMD has moved past Freesync 2 and now has a couple of tiers I believe (Premium and Pro or something like that). Again, this is just marketing that says these monitors with these stickers have a minimum set of features; there's nothing technical about it.
Marketing sets the requirements, which sets the standard later. FreeSync Premium Pro is the same as FreeSync 2 HDR, just renamed.

As was already pointed out, Gsync didn't have HDR from the beginning because HDR didn't exist when Gsync launched. "Freesync 2" with HDR support was actually announced at around the same time Nvidia announced HDR support for Gsync (if memory serves me correctly).
Which gets us back to my point: FreeSync 2 didn't require any quality HDR implementation, it just required bare-minimum HDR support - as usual - which turned out to be HDR400. When that didn't provide any quality HDR experience, AMD increased it to HDR600, while G-Sync required a whole set of higher-end features from the get-go: local dimming, HDR1000, variable overdrive, etc. The result is that the only good HDR monitors out there are G-Sync only, while very few FreeSync monitors actually support HDR1000 (and those without local dimming or a complete VRR range).

This is the crux of the matter: one standard forces a set of quality requirements, the other doesn't. G-Sync now encompasses everything from medium-quality G-Sync Compatible displays (with low framerate compensation) to ultra-high-end G-Sync Ultimate HDR displays. NVIDIA GPUs work with every FreeSync and G-Sync monitor; AMD GPUs are stuck with FreeSync monitors, which still lack ultra-high-end HDR models to this day.

You can half-bake a standard to get it widely adopted, but that will not do you any favors in the long run. Look at how monitor makers are racing to get the G-Sync Compatible badge, dumping the FreeSync badge along the way or pushing it to the side. The G-Sync brand was associated with quality from the beginning; FreeSync wasn't. Hence the current situation.
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
That's the relevant caveat here: it needed a monitor with a max/min refresh-rate ratio of 2.5. G-Sync monitors needed none of that; G-Sync monitors function even with a max refresh rate of 60Hz.


VRR with HDR, not VRR alone. VRR with HDR requires certain quality standards, otherwise the HDR experience will not be complete.


Marketing sets the requirements, which sets the standard later. FreeSync Premium Pro is the same as FreeSync 2 HDR, just renamed.


Which gets us back to my point: FreeSync 2 didn't require any quality HDR implementation, it just required bare-minimum HDR support - as usual - which turned out to be HDR400. When that didn't provide any quality HDR experience, AMD increased it to HDR600, while G-Sync required a whole set of higher-end features from the get-go: local dimming, HDR1000, variable overdrive, etc. The result is that the only good HDR monitors out there are G-Sync only, while very few FreeSync monitors actually support HDR1000 (and those without local dimming or a complete VRR range).

This is the crux of the matter: one standard forces a set of quality requirements, the other doesn't. G-Sync now encompasses everything from medium-quality G-Sync Compatible displays (with low framerate compensation) to ultra-high-end G-Sync Ultimate HDR displays. NVIDIA GPUs work with every FreeSync and G-Sync monitor; AMD GPUs are stuck with FreeSync monitors, which still lack ultra-high-end HDR models to this day.

You can half-bake a standard to get it widely adopted, but that will not do you any favors in the long run. Look at how monitor makers are racing to get the G-Sync Compatible badge, dumping the FreeSync badge along the way or pushing it to the side. The G-Sync brand was associated with quality from the beginning; FreeSync wasn't. Hence the current situation.

That's a lot of words to basically say: I prefer Nvidia's branding. That's fine, but it's all just marketing. I don't care one bit about that, and it has nothing to do with Freesync not having "proper" VRR.
 
Last edited:
  • Like
Reactions: Tlh97

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
This is the crux of the matter: one standard forces a set of quality requirements, the other doesn't. G-Sync now encompasses everything from medium-quality G-Sync Compatible displays (with low framerate compensation) to ultra-high-end G-Sync Ultimate HDR displays. NVIDIA GPUs work with every FreeSync and G-Sync monitor; AMD GPUs are stuck with FreeSync monitors, which still lack ultra-high-end HDR models to this day.

You can half-bake a standard to get it widely adopted, but that will not do you any favors in the long run. Look at how monitor makers are racing to get the G-Sync Compatible badge, dumping the FreeSync badge along the way or pushing it to the side. The G-Sync brand was associated with quality from the beginning; FreeSync wasn't. Hence the current situation.

You seem to be in the mindset that just because the lower-end requirement is low, no high-end options exist, which is untrue. By requiring very high standards, nVidia drove the price up to ridiculous levels.

FreeSync having lower base standards means the average person can still go out and buy a monitor that has FreeSync support and some level of HDR, which will be better than a monitor with neither. But people who want to spend more still can, because there are high-end monitors with better specs, such as this ASUS: https://www.newegg.com/black-asus-rog-strix-xg438q-90lm04u0-b011b0-43/p/N82E16824236989
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,715
7,004
136
A proprietary technology being *better* (in terms of quality) than existing alternatives doesn't necessarily translate into it having a future, or into it being anything more than a curiosity.

Gsync is *better*, but it has not seen adoption comparable to Freesync.

DLSS 2.0 is *better* than TAA and other established AA methods, but time will tell if it sees broad adoption (although the trend is against it) or becomes another interesting curio in the archives of GPU history.
 
  • Like
Reactions: Tlh97 and DXDiag

Guru

Senior member
May 5, 2017
830
361
106
Nope, sources have tested:
-Control
-MechWarrior 5
-Wolfenstein Youngblood



You can see whatever you like; all sources have concluded that DLSS 2 is equal to or better than native res + TAA, whether Digital Foundry (in their Control and Wolfenstein analyses), TechSpot, Overclock3D, or the dozens of other sources on YouTube and tech sites. It's also my personal observation. You are free to think whatever you want, but that's probably your tainted view, nothing more, and it goes against what testers have experienced.
Control, from the TechSpot source YOU provided: first picture, LOWER quality than native; second picture, LOWER quality than native; third picture, LOWER quality than native.

In every single picture, DLSS 2.0 textures are darker, have reduced detail and are oversharpened - in every single picture it has lower quality.

Wolfenstein Youngblood: again, in the first picture comparison DLSS looks darker and the texture color looks more monotone, but I'd give it that it looks sharper. TSSAA does look blurrier. So that is worse textures, worse colors, a darker image, but a sharper look.

In the second image DLSS does look darker, but texture quality is very similar; the colors in that scene are already way too monotone, bright white and black. DLSS does look sharper once again compared to the TSSAA 8x they are using.

Fourth pic - I'm skipping the third because it's way too dark to discern much from it - so we have TSSAA 8x, DLSS Quality and SMAA. TSSAA does look blurry, but it definitely has softer edges; there are no jagged edges, it looks polished, but blurry. DLSS does look sharper again, but even in this rather dark scene it tends to darken the image further, with a noticeable loss of detail - you can clearly see more dirt, rust and worn textures with TSSAA and especially SMAA. Again, these are small things, not too obvious, but they're THERE: it's LOWER QUALITY. It is removing some detail and some of the color palette and range, and is darkening things to make it process faster.

SMAA vs DLSS 2.0: SMAA wins. It looks sharper, it has higher detail, it has a broader color palette, though it does have slightly more jagged edges compared to DLSS 2.0. So SMAA at native resolution wins.

Again, this is why even Nvidia doesn't claim the absurd thing that DLSS has BETTER image quality than native - ONLY YOU in the whole world claim this nonsense. Nvidia would rather people focus on the performance aspect rather than the quality one, but leave it to Nvidia worshipers to make this giant fuss about how DLSS 2.0 is the second coming of Christ, better than sliced bread, the best thing invented since hot water, etc.

Again, kudos to Nvidia for being able to lower quality and mask it decently to offer more performance - fine - but it is definitely worse looking in general than native + any AA I've seen tested. The biggest consistent themes are darker textures, loss of detail, and quite a monotone color palette which makes objects look bland, though it is sharper to a certain extent than some AA and thus can look better in that respect.

As I said in my previous post, it's like a pretty decent art student who is just starting out vs a seasoned professional painter. DLSS is working with a smaller color palette and far less range of detail and quality. It's still good for a novice painter, but it's imitating the original art.
 
  • Like
Reactions: Tlh97 and Stuka87

TestKing123

Senior member
Sep 9, 2007
204
15
81
Control, from the TechSpot source YOU provided: first picture, LOWER quality than native; second picture, LOWER quality than native; third picture, LOWER quality than native.

In every single picture, DLSS 2.0 textures are darker, have reduced detail and are oversharpened - in every single picture it has lower quality.

Wolfenstein Youngblood: again, in the first picture comparison DLSS looks darker and the texture color looks more monotone, but I'd give it that it looks sharper. TSSAA does look blurrier. So that is worse textures, worse colors, a darker image, but a sharper look.

In the second image DLSS does look darker, but texture quality is very similar; the colors in that scene are already way too monotone, bright white and black. DLSS does look sharper once again compared to the TSSAA 8x they are using.

Fourth pic - I'm skipping the third because it's way too dark to discern much from it - so we have TSSAA 8x, DLSS Quality and SMAA. TSSAA does look blurry, but it definitely has softer edges; there are no jagged edges, it looks polished, but blurry. DLSS does look sharper again, but even in this rather dark scene it tends to darken the image further, with a noticeable loss of detail - you can clearly see more dirt, rust and worn textures with TSSAA and especially SMAA. Again, these are small things, not too obvious, but they're THERE: it's LOWER QUALITY. It is removing some detail and some of the color palette and range, and is darkening things to make it process faster.

SMAA vs DLSS 2.0: SMAA wins. It looks sharper, it has higher detail, it has a broader color palette, though it does have slightly more jagged edges compared to DLSS 2.0. So SMAA at native resolution wins.

Again, this is why even Nvidia doesn't claim the absurd thing that DLSS has BETTER image quality than native - ONLY YOU in the whole world claim this nonsense. Nvidia would rather people focus on the performance aspect rather than the quality one, but leave it to Nvidia worshipers to make this giant fuss about how DLSS 2.0 is the second coming of Christ, better than sliced bread, the best thing invented since hot water, etc.

Again, kudos to Nvidia for being able to lower quality and mask it decently to offer more performance - fine - but it is definitely worse looking in general than native + any AA I've seen tested. The biggest consistent themes are darker textures, loss of detail, and quite a monotone color palette which makes objects look bland, though it is sharper to a certain extent than some AA and thus can look better in that respect.

As I said in my previous post, it's like a pretty decent art student who is just starting out vs a seasoned professional painter. DLSS is working with a smaller color palette and far less range of detail and quality. It's still good for a novice painter, but it's imitating the original art.

Sorry, your post is full of manufactured BS. I bet if someone gave you two unlabeled screenshots, you wouldn't be able to tell the difference between DLSS 2.0 and native 4K. None of your descriptions appear to have any validity. In fact, why don't you post up screenshots highlighting these differences you are seeing? You know: darker, loss of detail, monotone palette, etc. Should be easy to circle/highlight? Or are you one of those people who manufacture dissent just for the sake of dissent? What testing have you done on your own? On which hardware? And which games?
 
  • Like
Reactions: DXDiag

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Sorry, your post is full of manufactured BS. I bet if someone gave you two unlabeled screenshots, you wouldn't be able to tell the difference between DLSS 2.0 and native 4K. None of your descriptions appear to have any validity. In fact, why don't you post up screenshots highlighting these differences you are seeing? You know: darker, loss of detail, monotone palette, etc. Should be easy to circle/highlight? Or are you one of those people who manufacture dissent just for the sake of dissent? What testing have you done on your own? On which hardware? And which games?

You can easily see it on the sites that have it set up to switch back and forth between images. Go here: https://www.eurogamer.net/articles/digitalfoundry-2020-control-dlss-2-dot-zero-analysis

Click 'Launch Comparison Tool' (please note that not all of the photos show 4K vs 1440p DLSS). They also messed up on the lower-middle image, where they have the exact same photo displayed for both 4K native and 1440p DLSS.

Click the top middle image, then drag the image to the right to see the 'Director's Office' text and the arrow next to it, or the edges of anything, which get a speckled look caused by the sharpening filter. You will see that some of the images 'look' sharper with DLSS, but this is the sharpening filter at work, which could be run on 4K as well if one wanted.
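
For reference, that kind of post-sharpening is essentially an unsharp mask. A minimal sketch of the idea (my own, not Nvidia's actual filter, which is more sophisticated) shows why it makes edges 'pop' and why overshoot shows up as speckle near high-contrast detail like text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.5, amount=0.8):
    """output = image + amount * (image - blurred image): boosts high frequencies,
    so edges look crisper, but overshoot appears as halos/speckle near hard edges.
    Expects a grayscale image with values in [0, 1]."""
    img = np.asarray(img, dtype=np.float32)
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```

Since it only amplifies detail already in the frame, you can run it on native 4K too, which is why a sharpened upscale 'looking' crisper isn't the same thing as it resolving more detail.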

Unfortunately I cannot find a site that does the same image comparison for Wolfenstein, which has better texturing than Control (a very muddy-looking game with poor texturing).
 

DXDiag

Member
Nov 12, 2017
165
121
116
Control, from the TechSpot source YOU provided: first picture, LOWER quality than native; second picture, LOWER quality than native; third picture, LOWER quality than native.
Sorry, that's your own bias/perception. The sources I quoted agree with the consensus that DLSS 2 looks as good as or better than native resolution + TAA, especially during motion, where TAA blurs everything out.

download.jpg


 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Sorry, that's your own bias/perception. The sources I quoted agree with the consensus that DLSS 2 looks as good as or better than native resolution + TAA, especially during motion, where TAA blurs everything out.

In that exact circumstance, the DLSS side looks a bit better. But this is most likely a result of the sharpening filter. It would need to be tested at 4K with the sharpening filter applied there as well (which is an available option).

However, as for the YouTube video, that's entirely invalid. YouTube's compression ruins comparisons unless there is a gigantic difference.

But, as you can see in the screenshot below, the DLSS 2.0 side looks terrible:
1587401066370.png
 
  • Like
Reactions: Tlh97 and Gideon

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
Sorry, that's your own bias/perception. The sources I quoted agree with the consensus that DLSS 2 looks as good as or better than native resolution + TAA, especially during motion, where TAA blurs everything out.

View attachment 19894



In that exact circumstance, the DLSS side looks a bit better.

No it doesn't.

Look at the red-marked places in the picture below and you can spot the differences.


You have to compromise with both techniques.
With NATIVE + TAA you get a very smooth picture, but at the same time it gives you blurriness.
Machine-learning super sampling/super anti-aliasing gives you a much crisper image + a performance uplift (not free - you still have to invest in hardware), but it will produce some artifacts.

Personally I would like to have a smooth image like TAA but without the blurriness; we could get that with NATIVE + ML anti-aliasing without the upscaling.

Control-4k-vs-1440p-dlss-1a.jpg
 
  • Like
Reactions: Tlh97

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
No it doesn't.

Look at the red-marked places in the picture below and you can spot the differences.


You have to compromise with both techniques.
With NATIVE + TAA you get a very smooth picture, but at the same time it gives you blurriness.
Machine-learning super sampling/super anti-aliasing gives you a much crisper image + a performance uplift (not free - you still have to invest in hardware), but it will produce some artifacts.

Personally I would like to have a smooth image like TAA but without the blurriness; we could get that with NATIVE + ML anti-aliasing without the upscaling.

I did notice the square-looking reflections in her eyes, and that the shadows have a square look too. My 'slightly better' remark was mostly about her skin, where you can see a bit more detail. I should have clarified better.

I think it would be interesting to have native resolution with the ML AA; that could be the best of both worlds.
 
  • Like
Reactions: Tlh97

DXDiag

Member
Nov 12, 2017
165
121
116
But, as you can see in the screenshot below, the DLSS 2.0 side looks terrible:
Yes, every once in a while you get those examples where the native + TAA picture elements look better, but the overall majority of picture elements look equal or better with DLSS. Every mode has its advantages and disadvantages.
In that exact circumstance, the DLSS side looks a bit better.

This isn't some "exact moment that happens every now and then" type of shot; this is the game in motion, which is the vast majority of the experience of the game. DLSS looks sharper and has better detail in every moving frame, and it plays significantly faster too.

Personally, I am quite fed up with the hostility - from some members - toward anything NVIDIA does on this forum; it's quite apparent and quite sickening, to be honest. Even when every major and non-major outlet on the internet praises DLSS 2 - after having been sharply critical of it in the past - we still get the same pedantic remarks and overblown, baseless criticism.

What NVIDIA has done here is nothing short of remarkable; taking a 1080p image and upscaling it to look the same as 4K is admirable.
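
For anyone curious how that kind of reconstruction works in principle, here's a bare-bones NumPy sketch of temporal upscaling (my own illustration, not NVIDIA's code - DLSS 2.0 layers a neural network on top of this to decide how to blend and reject samples):

```python
import numpy as np

def temporal_upscale(low_res, jitter, motion_px, history, scale=2, blend=0.2):
    """Accumulate jittered low-res frames into a high-res image over time.
    low_res:   (h, w) frame rendered with sub-pixel offset `jitter` (0..scale-1)
    motion_px: integer (dy, dx) scene motion in high-res pixels, for reprojection
    history:   previous high-res result, or None on the first frame"""
    if history is None:
        # first frame: naive nearest-neighbour upscale as a starting point
        history = np.kron(low_res, np.ones((scale, scale), dtype=low_res.dtype))
    # 1) reproject last frame's high-res result to where it sits this frame
    reprojected = np.roll(history, shift=motion_px, axis=(0, 1))
    # 2) splat the new low-res samples into their jittered high-res positions
    out = reprojected.copy()
    jy, jx = jitter
    out[jy::scale, jx::scale] = ((1 - blend) * reprojected[jy::scale, jx::scale]
                                 + blend * low_res)
    return out
```

Over a few frames the jitter pattern covers every high-res pixel with a real sample, which is where the extra detail comes from; the hard part (and where the ML comes in) is knowing when the history is stale and has to be thrown away.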
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
The best compromise with ML SS/DLSS 2.0 is the Quality setting (2x upscaling). We get faster performance + a crisper image at 99% of native image quality most of the time.
 
  • Like
Reactions: Tlh97

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
I usually hate on NV for their proprietary stuff, but if this works it could be a solution for people who also use the monitor for work, not just gaming. I bought a 1080p one about 4 years ago simply because I did not want to pay >$700 for GPUs going forward. If you can get a larger 2K or 4K display for work and then not suffer too much when gaming, it would be a nice addition.

Having said that, does DLSS 2.0 work with VRR? And am I right to assume that VRR and anti-blur (aka strobing) are still incompatible?

Historically a feature locked to a single vendor almost never becomes ubiquitous.

CUDA is basically NV's bread and butter. It's the definition of a proprietary vendor lock-in feature, and it's the standard in deep learning.
 
  • Like
Reactions: DXDiag

Gideon

Golden Member
Nov 27, 2007
1,608
3,570
136
Gamers Nexus take:

Benchmarks
Quality analysis

With games supporting ray tracing, DLSS is definitely worth it. I wonder what AMD's equivalent will be. Radeon Sharpening + upscaling isn't bad (from 1440p -> 4K), but they need something much better for the RDNA 2 generation.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Doing that would only make sense if they've included tensor cores (or equivalent), and I don't really see why they'd make that trade-off within a games-console GPU.

It would definitely be very debatable. DLSS 2.0 now seems like a good enough use of the tensor cores once they're actually included, though.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
Doing that would only make sense if they've included tensor cores (or equivalent), and I don't really see why they'd make that trade-off within a games-console GPU.

It would definitely be very debatable. DLSS 2.0 now seems like a good enough use of the tensor cores once they're actually included, though.

Tensor cores are not mandatory for doing Machine Learning. DX-12 DirectML can take advantage of any GPU hardware, including Tensor Cores.
In the future most games will use DirectML and not DLSS.