
[ Digital Trends ] Radeon Image Sharpening competes with DLSS

Page 2 - AnandTech Forums
getting much better and is a welcome addition for games that use some form of RT. I know that for future games like Cyberpunk 2077, I'll happily enable DLSS if I can use RT and get OK performance.

Why not just 1800p and get the same performance?

You don't even need upscaling. Just make sure your games run at a certain resolution. I think it was Hardware Unboxed that did a good review of it and concluded that 1800p = 4K DLSS.

By next year, we should see next gen Nvidia cards. Say it boosts performance by 25% normally and 40% with Ray Tracing on.

Also, you should play around with RT settings. Don't need to have Ultra with massive loss, when High/Medium and even Low might work just as well.
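For scale, the pixel-count arithmetic behind the 1800p-vs-4K comparison (assuming standard 16:9 resolutions) works out like this:

```python
# Pixel-count comparison behind the "1800p ≈ 4K DLSS" claim.
# Both resolutions assume a 16:9 aspect ratio.
res_4k = 3840 * 2160      # native 4K target: 8,294,400 pixels
res_1800p = 3200 * 1800   # 1800p render resolution: 5,760,000 pixels
ratio = res_1800p / res_4k
print(f"1800p renders {ratio:.0%} of the pixels of native 4K")
# → 1800p renders 69% of the pixels of native 4K
```

So an 1800p render shades roughly two-thirds of the pixels of native 4K, which is where the performance headroom comes from.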
 
It's going to depend on the engine. Most game engines aren't really set up properly for ideal NN training yet.


This is probably why the quality of DLSS varies so much between games so far. This, along with the lack of DirectML support, seems to have significantly gimped DLSS on the quality side overall. A big part of Nvidia's recent RTX partnership efforts with the big game engines seems to include better engine-level optimizations for NN training, so it will be interesting to see what DLSS looks like in upcoming titles once it's more ideally supported.
 
You'll never get the full quality because it's working from data that doesn't exist.

The biggest problem with DLSS is not that the quality isn't up to par, but that the performance cost is roughly equal to the quality loss when compared with a simple resolution change. Like Hardware Unboxed said: you get 1800p quality and 1800p performance.

So what have they gained by using Tensor Cores and wasting data center processing with DLSS? Nothing.

Ahh, what am I doing? The OP is basically trolling with the post that has little to do with the title.
 
But it should be better - that follows from basic information theory. A simple upscaler has to work with a generic algorithm that doesn't know anything about what's been drawn. DLSS should be like a highly compressed instruction manual on how to upscale a particular game, not a dumb one-size-fits-all. I'm not saying it's there yet, but that has to be the end goal.
 

Simple upscaling almost always uses bandlimited interpolation - the higher frequencies simply aren't there. The idea behind DLSS is to at least guess those higher frequencies. That's as far as the theory goes.
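The bandlimited-interpolation point can be seen in a few lines. Here is a minimal sketch using plain linear interpolation (not any vendor's actual scaler): every output sample is just a blend of its neighbours, so no frequency content beyond what the low-res signal captured can ever appear.

```python
import numpy as np

# Coarse samples of a wave: the only information the upscaler gets.
lowres = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
x_low = np.arange(len(lowres))
x_high = np.linspace(0, len(lowres) - 1, 17)  # 4x denser sample grid

# Linear interpolation onto the denser grid.
upscaled = np.interp(x_high, x_low, lowres)

# The upscaled signal passes exactly through the original samples...
assert np.allclose(upscaled[::4], lowres)
# ...but every new sample lies between its two neighbours, so nothing
# sharper than the original sampling can be reconstructed.
print(upscaled)
```

A learned upscaler, by contrast, can *guess* plausible high-frequency detail from patterns seen in training data - which is exactly the bet DLSS makes.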
 
These are intriguing technologies. They could turn out to be bad / never supported, but let's hope for the best.
 

Confirms RIS is superior overall to DLSS in practice.

But it's extremely stupid of AMD not to support DX11, especially since this should be an API-agnostic postfilter.
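To illustrate why a sharpening post-filter can be API-agnostic: it operates purely on the final image and needs nothing from the rendering API. Below is a minimal unsharp-mask sketch in NumPy - this is NOT AMD's actual CAS/RIS algorithm, just the simplest filter in the same family.

```python
import numpy as np

def unsharp_mask(img, amount=0.5):
    """Sharpen a 2-D grayscale image by adding back high frequencies."""
    # 3x3 box blur via edge padding and neighbour averaging.
    padded = np.pad(img, 1, mode="edge")
    blur = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    # High-pass = original - blur; add a scaled copy back in.
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# A soft horizontal ramp: sharpening steepens its edges.
img = np.tile(np.linspace(0.0, 1.0, 4), (4, 1))
print(unsharp_mask(img))
```

Because the filter sees only pixels, the same idea could in principle be dropped behind any API's output, which is why the DX11 omission looks like a policy choice rather than a technical one.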

AMD's GPU division's motto over the last several launches has been "snatching defeat from the jaws of victory", so it should really not surprise anyone...
 