
Question: DLSS 2.0


gorobei

Diamond Member
Jan 7, 2007
3,089
236
106
Did you not watch AtenRa's video link? The dev indicated you only need the tensor stuff when training the AI on the AA, and once it is trained you can run it without the tensor convolution stuff. Which jibes with the story about someone hacking DLSS 2.0 to work on the GTX 1660 and Radeon cards.
 

Dribble

Golden Member
Aug 9, 2005
1,838
377
136
Interesting YouTube video on DLSS 2.0 vs PS4 checkerboard rendering in Death Stranding:

Obviously DLSS looks much better than anything else, but the interesting thing I took from the video is that it requires motion vectors, and anything missing motion vectors gets trails, so you can't just turn it on in some old game. The game must provide accurate motion vectors for it to work well.
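To make the motion-vector requirement concrete, here is a toy 1-D Python sketch of temporal accumulation. All names and numbers are invented for illustration; real DLSS replaces the fixed blend with a learned network, but the reprojection step is the part that needs motion vectors from the game.

```python
import numpy as np

def render_frame(pos, width=16):
    """A toy 1-D 'frame': a single bright pixel at the object's position."""
    frame = np.zeros(width)
    frame[pos] = 1.0
    return frame

def temporal_blend(current, history, motion, alpha=0.1):
    """Blend the current frame with history reprojected by per-pixel motion."""
    width = len(current)
    idx = (np.arange(width) - motion) % width   # where each pixel was last frame
    return alpha * current + (1 - alpha) * history[idx]

history = render_frame(0)
for t in range(1, 8):
    current = render_frame(t)
    motion = np.full(len(current), 1)           # object moves +1 px per frame
    history = temporal_blend(current, history, motion)

# With correct motion vectors the accumulated image stays sharp:
# all the energy sits on the object's current position (pixel 7).
```

With the motion vector supplied, the history sample is always fetched from where the object used to be, so frames line up and nothing smears; drop that information and the blend happens in the wrong place, which is exactly the trailing artifact described above.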
 
Last edited:

GodisanAtheist

Platinum Member
Nov 16, 2006
2,418
845
136
Did you not watch AtenRa's video link? The dev indicated you only need the tensor stuff when training the AI on the AA, and once it is trained you can run it without the tensor convolution stuff. Which jibes with the story about someone hacking DLSS 2.0 to work on the GTX 1660 and Radeon cards.
-Wonder if we'll eventually get an open AMD alternative trained on CDNA gpus further on down the line.

Wonder if that's what AMD meant when they referred to "Cloud based Ray Tracing" in their initial RDNA2 slides as the next step after RDNA2's implementation.
 

Thala

Golden Member
Nov 12, 2014
1,095
430
136
Interesting YouTube video on DLSS 2.0 vs PS4 checkerboard rendering in Death Stranding:

Obviously DLSS looks much better than anything else, but the interesting thing I took from the video is that it requires motion vectors, and anything missing motion vectors gets trails, so you can't just turn it on in some old game. The game must provide accurate motion vectors for it to work well.
This is totally impressive! Even DLSS in performance mode (e.g. 1080p → 4K) looks much better than 4K checkerboard.
 
  • Like
Reactions: DXDiag

USER8000

Golden Member
Jun 23, 2012
1,517
741
136
This is totally impressive! Even DLSS in performance mode (e.g. 1080p → 4K) looks much better than 4K checkerboard.
Checkerboard on a 2016 PS4 though, with a shit graphics card? A GTX 1650 Super is faster than the PS4 Pro. Wouldn't a fairer comparison be the new consoles?
 
  • Like
Reactions: beginner99

Thala

Golden Member
Nov 12, 2014
1,095
430
136
Checkerboard on a 2016 PS4 though, with a shit graphics card? A GTX 1650 Super is faster than the PS4 Pro. Wouldn't a fairer comparison be the new consoles?
I am not comparing performance whatsoever - I am comparing different upscaling solutions purely from a technical and IQ perspective. And what can be achieved with DLSS 2.0 is simply mind-blowing.
Unfortunately for the new consoles, they went with an AMD RDNA 2 solution, which, based on current information, does not provide anything comparable to DLSS 2.0.
 
Last edited:
  • Like
Reactions: guidryp

beginner99

Diamond Member
Jun 2, 2009
4,625
1,009
136
This is totally impressive! Even DLSS in performance mode (e.g. 1080p → 4K) looks much better than 4K checkerboard.
It is until you see the "particle effect" issue. No free lunch, it seems.

For those not watching the video: if you have particles rising up (simulating fog or heat or whatever) with DLSS enabled, they suddenly get very prominent comet-like trails. How much of an issue this is probably depends on the game.
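A minimal sketch of why a particle without motion vectors gets exactly this comet-trail look, assuming a simple exponential history blend (illustrative only; the real accumulation is learned, but the failure mode is the same):

```python
import numpy as np

def render_frame(pos, width=16):
    frame = np.zeros(width)
    frame[pos] = 1.0
    return frame

# A particle moving one pixel per frame, accumulated over 7 frames,
# but with zero (i.e. missing) motion vectors: the history is blended
# in place instead of being reprojected, so old positions fade out
# slowly and leave a comet-like trail behind the particle.
history = render_frame(0)
for t in range(1, 8):
    current = render_frame(t)
    history = 0.1 * current + 0.9 * history     # no reprojection

trail_pixels = int((history > 0.01).sum())      # pixels still visibly lit
```

Only one pixel should be lit, yet all eight positions the particle has occupied still carry visible energy; the brightest residue sits at the particle's starting point, which is the trail the video shows.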
 
  • Like
Reactions: USER8000

Thala

Golden Member
Nov 12, 2014
1,095
430
136
It is until you see the "particle effect" issue. No free lunch, it seems.

For those not watching the video: if you have particles rising up (simulating fog or heat or whatever) with DLSS enabled, they suddenly get very prominent comet-like trails. How much of an issue this is probably depends on the game.
I have seen the issues with the particles - it makes the technology no less impressive. Interestingly, if you follow the comments on YouTube, people playing the game with DLSS thought the trails were intended... so at least it did not look totally out of place to the audience.

In the bigger picture, DLSS is an amazing technology from an enabling point of view. You free up so much GPU performance, with literally no degradation of IQ (and sometimes even an improvement over native), that you can use the additional headroom for much heavier things like ray tracing, generally better illumination and other expensive effects.

Another way to look at it is from a generational standpoint. You could potentially play games on, say, an RTX 2060 with DLSS at similar IQ and framerate to an RTX 4060 three years from now without DLSS. In some sense DLSS up-levels the hardware by two generations. I know it is a somewhat simplified view, but you get the point.
 
Last edited:

beginner99

Diamond Member
Jun 2, 2009
4,625
1,009
136
I know it is a somewhat simplified view, but you get the point.
Well, I'm waiting for the releases as I need a new GPU - still on a 290X and it's getting old... I will likely go with NV this round, not because of DLSS but for data science (CUDA). DLSS will only be a potential added bonus, albeit I expect less performance/$ going with NV.
 

DXDiag

Member
Nov 12, 2017
161
110
86
It is until you see the "particle effect" issue. No free lunch, it seems.
Those drawbacks are happening because some engines lack motion vectors for some particle or 2D elements; when engines integrate better support for DLSS, these issues will disappear.
 

Gideon

Golden Member
Nov 27, 2007
1,028
1,684
136
It looks like Sony is also working on deep-learning-based image reconstruction:

 

Thala

Golden Member
Nov 12, 2014
1,095
430
136
It looks like Sony is also working on deep-learning-based image reconstruction:

Uh, no - this is rather about reconstructing an object from a plurality of images of that object, i.e. scanning an object with a camera. That this shows up under such a headline is down to Android Central's lack of reading comprehension.
 
  • Like
Reactions: DXDiag and Krteq

guidryp

Senior member
Apr 3, 2006
444
292
136
DLSS is the same technology as Microsoft's DirectML, which every DX12 graphics card can use.
DirectML will be used both in consoles (Xbox Series X) and on desktop, from both AMD and NVIDIA.
I will not be surprised if NV stops using DLSS in favor of DirectML in the future, since DirectML will see wider application in most future games on both consoles and PC.

No.

DirectML is an API for running Neural Nets.

The NVIDIA equivalent to DirectML would be the deep learning APIs for CUDA:

DLSS is an application that runs on those APIs, not an alternative API itself. It could theoretically be ported to DirectML, but I don't see that happening anytime soon; NVIDIA put a lot of research into getting DLSS 2.0 to where it is, and probably sees it as a competitive advantage.
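The API-vs-application distinction can be sketched in a few lines. Everything here is invented for illustration: `run_inference` merely stands in for a generic ML runtime (DirectML, or NVIDIA's CUDA deep learning libraries), and `ToyUpscaler` for an application such as DLSS that ships a trained model and calls the runtime.

```python
import numpy as np

def run_inference(weights, inputs):
    """The 'API' layer: executes a (toy) one-layer ReLU network."""
    return np.maximum(weights @ inputs, 0.0)

class ToyUpscaler:
    """The 'application' layer: owns trained weights, calls the runtime."""
    def __init__(self, weights):
        self.weights = weights                  # trained offline, shipped as data

    def upscale(self, lo_res):
        return run_inference(self.weights, lo_res)

# Pretend-trained weights mapping 2 low-res values to 4 high-res values.
up = ToyUpscaler(np.ones((4, 2)) * 0.5)
out = up.upscale(np.array([0.2, 0.6]))
```

Porting the application to another runtime means swapping out `run_inference` while keeping the weights and surrounding logic, which is the sense in which DLSS "could theoretically be ported" to DirectML.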
 
  • Like
Reactions: GodisanAtheist

AtenRa

Lifer
Feb 2, 2009
13,492
2,397
126
No.

DirectML is an API for running Neural Nets.

The NVIDIA equivalent to DirectML would be the deep learning APIs for CUDA:

DLSS is an application that runs on those APIs, not an alternative API itself. It could theoretically be ported to DirectML, but I don't see that happening anytime soon; NVIDIA put a lot of research into getting DLSS 2.0 to where it is, and probably sees it as a competitive advantage.
Not being an API doesn't mean they don't use the same technologies.
Yes, DirectML can do more things, being an API, but they both use the same technologies to do AI upscaling and AA.
 

Tup3x

Senior member
Dec 31, 2016
419
264
106
Not being an API doesn't mean they don't use the same technologies.
Yes, DirectML can do more things, being an API, but they both use the same technologies to do AI upscaling and AA.
Apples to oranges. I don't see how DirectML has anything to do with DLSS. DirectML is a tool to achieve a solution, while DLSS is a solution.
 
  • Like
Reactions: guidryp

guidryp

Senior member
Apr 3, 2006
444
292
136
Crosspost from the RDNA2 thread, in an attempt to stop derailing that thread with DLSS discussion. This seems like a decent existing thread for DLSS discussions.

DLSS is only implemented in games with TAA, because it is a more advanced TAA with a motion vector stage for image recombination; this part runs NV's ML algorithm to determine how best to reconstruct the new image based on the prior image and motion vector samples. This processing of motion vectors is the reason for the performance penalty, typically 1.4 to 2 ms depending on the available GPU power.

If devs use another form of AA, you can't have DLSS. Sadly, TAA is usually terrible so that's why so many gamers get fooled into believing DLSS is some magic "looks better than native".
This really isn't the case.

TAA is essentially the standard AA method in most modern game engines - it's the best bang for the buck. You would have a hard time finding a modern triple-A game without TAA, and even when there are multiple options it tends to be the best one present. So there is really not much concern about games lacking TAA today.

DLSS doesn't depend on TAA, but it has a similar setup. So if the game engine or developers have already done the work of implementing TAA, it isn't much more work to do something similar for DLSS 2.0.
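For intuition on what that shared setup buys, here is a toy 1-D sketch of jitter-based accumulation, the common idea behind TAA-style temporal upscalers: several low-resolution frames, each sampled at a slightly different sub-pixel offset, together recover a higher-resolution signal. All numbers are invented, and real DLSS 2.0 weights the samples with a neural network rather than a plain average.

```python
import numpy as np

def scene(x):
    """Ground-truth 'scene': a sine wave with 3 cycles across the screen."""
    return np.sin(2 * np.pi * 3 * x)

hi_res, lo_res = 64, 16
scale = hi_res // lo_res          # 4 low-res frames cover one hi-res frame

accum = np.zeros(hi_res)
hits = np.zeros(hi_res)
for j in range(scale):
    offset = j / hi_res                       # per-frame sub-pixel jitter
    xs = np.arange(lo_res) / lo_res + offset  # jittered low-res sample points
    idx = np.round(xs * hi_res).astype(int) % hi_res
    accum[idx] += scene(xs)                   # scatter samples into hi-res grid
    hits[idx] += 1

reconstructed = accum / np.maximum(hits, 1)
truth = scene(np.arange(hi_res) / hi_res)
err = np.abs(reconstructed - truth).max()
```

Because the four jitter offsets tile every high-resolution position exactly once, the accumulated buffer matches the full-resolution signal; in a real game the scene moves between frames, which is why the motion-vector machinery from TAA is needed to line the samples up first.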

For those interested in more technical DLSS 2.0 information, I found a really good DLSS 2.0 video that looks like it was taken from NVIDIA developer material (so it might get taken down at some point):

There is a fair bit of basic preamble, so I time-stamped it to ~14 minutes, where it starts getting into the more interesting details:


This really is the best technical detail I have seen on DLSS 2.0 and on what it takes to integrate it into game engines.

Edit: Engine Integration discussion starts at: 40:50.
 
Last edited:

Shivansps

Diamond Member
Sep 11, 2013
3,062
736
136
What is likely going to happen here is that AMD will come up with a solution using DirectML + Radeon Image Sharpening.
DirectML is already used for upscaling; in fact, one of the DirectML samples is a video upscaler.

Everyone will call that "good enough" and DLSS will be used only in titles sponsored by Nvidia.
 

guidryp

Senior member
Apr 3, 2006
444
292
136
What is likely going to happen here is that AMD will come up with a solution using DirectML + Radeon Image Sharpening.
DirectML is already used for upscaling; in fact, one of the DirectML samples is a video upscaler.

Everyone will call that "good enough" and DLSS will be used only in titles sponsored by Nvidia.
That will just give you something equivalent to DLSS 1.0, which probably won't be good enough.

AMD will need an actual reconstruction technique, based on ties into the game engine, if it is going to get close to DLSS 2.0 quality. At that point it is just as much work to implement the AMD solution, and thus unlikely to gain any more popularity.

But if the solutions work from the same engine inputs, game engines can abstract a common interface and support both with the same game code (and after some time, MS might adopt an official reconstruction-scaling API).
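A hypothetical sketch of such an engine-side abstraction, with entirely invented class and method names (no real DLSS or DirectML bindings are used; the point is only that both vendors would consume the same inputs - color, depth, motion vectors, jitter):

```python
from abc import ABC, abstractmethod

class TemporalUpscaler(ABC):
    """Engine-facing interface; vendor back-ends hide behind it."""
    @abstractmethod
    def upscale(self, color, depth, motion_vectors, jitter):
        """Return a reconstructed high-resolution frame."""

class DlssUpscaler(TemporalUpscaler):
    def upscale(self, color, depth, motion_vectors, jitter):
        return f"DLSS({color}, {jitter})"       # placeholder for a vendor call

class GenericMlUpscaler(TemporalUpscaler):
    def upscale(self, color, depth, motion_vectors, jitter):
        return f"DirectML({color}, {jitter})"   # hypothetical alternative

def render(upscaler: TemporalUpscaler):
    # Engine code is written once against the interface, not the vendor.
    return upscaler.upscale("lowres_color", "depth", "mv", (0.25, 0.25))

frame_a = render(DlssUpscaler())
frame_b = render(GenericMlUpscaler())
```

The design choice mirrors how engines already abstract over graphics APIs: as long as both reconstruction techniques take the same inputs, supporting a second vendor is a back-end swap rather than a second integration.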
 

Dribble

Golden Member
Aug 9, 2005
1,838
377
136
What is likely going to happen here is that AMD will come up with a solution using DirectML + Radeon Image Sharpening.
DirectML is already used for upscaling; in fact, one of the DirectML samples is a video upscaler.

Everyone will call that "good enough" and DLSS will be used only in titles sponsored by Nvidia.
AMD needs two things:
1) The hardware. Without some specialised hardware they will not have the performance to run a DLSS-like machine learning program fast enough on their GPUs.
2) The software. The DLSS program that runs on end users' GPUs was generated by several years of research using AI supercomputers.

Without both of those, a DLSS equivalent isn't possible.
 

Mopetar

Diamond Member
Jan 31, 2011
4,850
1,141
136
Since DLSS relies on prior training done off-GPU that can then be used by a GPU, is there anything stopping AMD from piggybacking off of NVIDIA's work, assuming they have hardware that can perform the necessary calculations sufficiently fast?

It's not as though NVIDIA is doing something indistinguishable from magic. All that's truly necessary is software that understands how to apply the training to an image. Obviously that's not trivial, but it seems possible. Legal hurdles seem more of a problem than anything else.
 

guidryp

Senior member
Apr 3, 2006
444
292
136
Since DLSS relies on prior training done off-GPU that can then be used by a GPU, is there anything stopping AMD from piggybacking off of NVIDIA's work, assuming they have hardware that can perform the necessary calculations sufficiently fast?
Back when the trained network was unique to each game, it might have been part of the game package.

But now that the trained network is generic across games, it likely resides in NVIDIA's drivers.

In either case, it's their proprietary trained network, so it's doubtful it's free for AMD to use.
 

beginner99

Diamond Member
Jun 2, 2009
4,625
1,009
136
Since DLSS relies on prior training done off-GPU that can then be used by a GPU, is there anything stopping AMD from piggybacking off of NVIDIA's work, assuming they have hardware that can perform the necessary calculations sufficiently fast?

It's not as though NVIDIA is doing something indistinguishable from magic. All that's truly necessary is software that understands how to apply the training to an image. Obviously that's not trivial, but it seems possible. Legal hurdles seem more of a problem than anything else.
Even if none of this is patented and it is theoretically reproducible from reading publications, AMD would still need a team of AI experts to gain experience in this and then implement it. Such experts easily cost >$200k each in the US. Multiply that by five for a team, and by several years. No, it isn't easy or cheap.
 

Dribble

Golden Member
Aug 9, 2005
1,838
377
136
Since DLSS relies on prior training done off-GPU that can then be used by a GPU, is there anything stopping AMD from piggybacking off of NVIDIA's work, assuming they have hardware that can perform the necessary calculations sufficiently fast?

It's not as though NVIDIA is doing something indistinguishable from magic. All that's truly necessary is software that understands how to apply the training to an image. Obviously that's not trivial, but it seems possible. Legal hurdles seem more of a problem than anything else.
The end result is a program, which is very much protected in law. You can't just rip off another company's software - you'll get sued fast. E.g. just because Apple's software runs on x86 doesn't mean Microsoft is allowed to nick bits of it to use in Windows.

I am sure this will all get standardized eventually, but that might take a while - perhaps even until the next round of consoles, where, if AMD provides the hardware, someone like Microsoft will have the incentive to write the software, and being Microsoft they'll make it a Windows standard.
 

DJinPrime

Member
Sep 9, 2020
47
54
46
The software is not the problem - DLSS on CUDA could be ported to DirectML. What's important is actually the process. There are three basic components when training an AI neural network:
1) The inputs. In DLSS 1.0 they were using the actual game, which for obvious reasons is not ideal. With DLSS 2.0, NV is using generic sets of graphical data. Trade secret. Would they share? Probably not.
2) The neural network: the program/algorithm that does the image reconstruction. Given an input image, this program creates a new image. This is what's being trained. Again, would they share this? Probably not.
3) The expected result: with a given input and the current NN, some result is generated. They compare this with the expected result and continue to tweak the neural network until the output is what they want it to be. Again, would they share? Probably not.
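Those three components can be caricatured in a few lines of Python: a tiny linear "network" is trained to undo a 2x downsample. Everything here is invented for illustration and stands in for the real (far larger, proprietary) setup - the inputs are random patches, the network is a single matrix, and the expected results are the original high-res patches.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 2)) * 0.1       # (2) the "network": one linear filter

for step in range(10000):               # the tweak-until-it-matches loop
    hi = rng.normal(size=4)             # (3) expected result: hi-res patch
    lo = hi.reshape(2, 2).mean(axis=1)  # (1) input: its 2x-downsampled version
    pred = w @ lo                       # current network output
    grad = 2.0 * np.outer(pred - hi, lo)  # gradient of the squared error
    w -= 0.01 * grad                    # adjust the network slightly

# The filter learns to copy each low-res pixel back into the two hi-res
# pixels it was averaged from: w approaches [[1,0],[1,0],[0,1],[0,1]].
```

Once trained, only the final `w` needs to ship (the "model in the driver"); applying it per frame is cheap compared to the training loop, which is the split between NVIDIA's supercomputer-side work and the tensor-core inference on the user's GPU.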

So, once the NN is trained to the level they want, the model ships in the NV driver. With a supported game, the model is used in real time to generate the image. You need fast hardware to do this - that's what the tensor cores are for. You could run it on anything, even a CPU, but it would be slow and therefore defeat the purpose. Can you hack the NV driver and steal the model? Probably, but it's illegal.

We don't know what sort of tensor support RDNA 2 will have, since AMD hasn't leaked much info, so we don't know how well AMD hardware would perform with something like DLSS. Also, the training would be ongoing. So, will AMD put in the effort and cost of doing this? I think Microsoft would be more likely to do it than AMD, if the new consoles have sufficient hardware to make it viable.

All the info for NV is actually given on their DLSS 2.0 page: https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/

One more thing, and this is the awesome part: DLSS 2.0 uses 16K images to train the NN. That's why the result is so good - what it's trained against is better than the actual game output. So it's not your imagination: sometimes the DLSS 2.0 image can be clearer than the native output, because the NN is able to construct an image with more detail than the assets included with the game, especially if the game assets themselves aren't that high-res. Yeah, it sounds crazy, but that's what's cool about AI and why it's blowing up in all sorts of industries. When it works, it's scary how good it is.
 
  • Like
Reactions: Tlh97 and Qwertilot

AtenRa

Lifer
Feb 2, 2009
13,492
2,397
126
Apples to oranges. I don't see how DirectML has anything to do with DLSS. DirectML is a tool to achieve a solution, while DLSS is a solution.
I said they both use the same technologies (convolution, inferencing, etc.) to do upscaling and anti-aliasing. Just because one is an API doesn't change the fact that they both do the same thing using the same technologies.

For those that haven't watched it before, I will post the video again below.

You can see DirectML upscaling using convolution (same as DLSS 2.0), with an NVIDIA-trained model, almost two years before NVIDIA released DLSS 2.0.

 