
Disappointed in Nvidia DSR - introduces blur

Anyone try games even older than Crysis 1? Like Halo 2 PC or any other really low-end DX9 game that is starting to look really long in the tooth?
 
This has to be much more than SSAA. Some people say that 4x SSAA on a 1080p screen is the same as running 4k. That is NOT true, as even 3200x1800 is way more demanding.

Anyway, I just tested Tomb Raider:

- 1920x1080: 119 fps
- 1920x1080 with 4x SSAA: 58 fps
- 3840x2160 DSR: 38 fps

So as you can see, DSR has the same performance impact as really running 4k.
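The Tomb Raider numbers line up with a simple pixel-count argument; a quick back-of-the-envelope check (fps figures taken from the post above):

```python
# DSR 4x renders exactly 4x the pixels of 1080p, so in a pixel-bound
# game the frame rate should drop to roughly a quarter.
pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160

pixel_ratio = pixels_4k / pixels_1080p   # 4.0
fps_native, fps_dsr = 119, 38            # measured figures from the post
observed_ratio = fps_native / fps_dsr    # ~3.1, close to the 4x pixel cost

print(pixel_ratio, round(observed_ratio, 1))  # 4.0 3.1
```

The observed ratio comes in a bit under 4x because not every part of the frame (game logic, geometry) scales with resolution.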
 
> Anyone try games even older than Crysis 1? Like Halo 2 PC or any other really low-end DX9 game that is starting to look really long in the tooth?
Why though? Really old games are so ugly and outdated-looking that it won't really matter.
 
> this has to be much more than SSAA.
There's a Gaussian blur post-filter being applied to the result. Also, from Fermi onwards there's hardware-accelerated jitter, but this method can't take advantage of that.

The advantage this method has is that it works in practically any 3D game.

> really old games are so ugly and outdated looking that it wont really matter.
Nonsense.
 
> There's a Gaussian blur post-filter being applied to the result. Also from Fermi onwards there's hardware accelerated jitter, but this method can't take advantage of that.
>
> The advantage this method has is that it works in practically any 3D game.
>
> Nonsense.
Well, I have my opinion and you have yours. I don't give a rat's behind about running a 15-year-old game at 4k.
 
> well I have my opinion and you have yours. I don't give a rat's behind about running a 15 year old game at 4k.
Even a game using 1996 artwork (Descent 2) exhibits visible texture aliasing in places without SSAA.

With that said, this method isn't needed for older games; it's for new games that need the compatibility.
 
Doesn't BF4 have a rendering resolution option in the graphics options anyways?

Sure, but I dig the Gaussian filter. I reject the idea that sharper always = better. There's more than one valid way to downscale an image; traditional SSAA just averages the results per pixel. Here's the difference demonstrated quite well by an image ripped from Wikipedia:

[Image: Gaussian_blur_before_downscaling.png]

Even just in a still image you can see how imperfect normal SSAA can be. There's clear moiré on the ridges. Some lines are thicker and blurrier than others. And in motion those pixels will crawl like crazy. Sure, the other image is softer, but IMO it's a less distracting and more eye-pleasing representation.
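The tradeoff being described here can be sketched in a few lines of NumPy (an illustrative toy, not NVIDIA's actual resolve): downscale a moiré-prone stripe pattern once with a plain box average and once with a Gaussian pre-blur. The box resolve leaves a strong false low-frequency pattern; the pre-blurred version comes out nearly uniform.

```python
import numpy as np

def box_downsample(img, factor):
    """Plain SSAA-style resolve: average each factor x factor block."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def gaussian_blur(img, sigma):
    """Separable Gaussian blur; wrap padding suits the periodic test pattern."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    blur1d = lambda v: np.convolve(np.pad(v, radius, mode='wrap'), k, 'valid')
    tmp = np.apply_along_axis(blur1d, 1, img)
    return np.apply_along_axis(blur1d, 0, tmp)

# Vertical stripes with period 3: above the Nyquist limit of the
# 4x-smaller target, so a box resolve aliases them into moire.
hi = np.tile((np.arange(72) % 3 < 2).astype(float), (72, 1))

box = box_downsample(hi, 4)                        # visible false pattern
gauss = box_downsample(gaussian_blur(hi, 2.0), 4)  # nearly flat grey

# Standard deviation as a crude measure of leftover aliasing energy:
print(round(box.std(), 3), round(gauss.std(), 3))
```

The box result keeps a strong spurious column pattern (values alternating around 0.75 and 0.5), while the Gaussian-prefiltered result is essentially flat, at the cost of suppressing legitimate detail at that frequency too.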
 
> There's a Gaussian blur post-filter being applied to the result. Also from Fermi onwards there's hardware accelerated jitter, but this method can't take advantage of that.
>
> The advantage this method has is that it works in practically any 3D game.
>
> Nonsense.

Just to be clear, they're not blurring the "result". They blur the higher res image before downscaling.
 
> NV's DSR uses Gaussian filter which will blur, not sure why they went with that approach.

Because it is better than a box filter? This is especially true with a moving image.

It also allowed them to have a smoothness control for people who dislike the blur the filter causes.
I think SSAA looks like garbage in Tomb Raider, as it blurs more than FXAA.
Use a negative mipmap bias, which gives sub-pixel frequency to textures.
Combined with SSAA, this gets shading properly done at the sub-pixel level, gives more information than normal MSAA, and doesn't blur the image.

If UI or text is blurred, it depends on how it's scaled to fit the screen; mipmap bias should help with this as well. (If the UI uses nearest-point filtering with textures fitted to the resolution, it will not blur with SSAA.)
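The mipmap-bias tip can be sketched numerically. This is a toy model of standard LOD selection (the helper name and numbers are illustrative, not a real API); the usual rule of thumb is bias = -0.5 * log2(sample count), i.e. -1 for 4x SSAA:

```python
import math

def mip_level(texels_per_pixel, bias=0.0, num_levels=12):
    """Toy LOD selection: log2 of the screen-space texel footprint,
    clamped to the mip chain, plus an application-set bias
    (negative = sharper, higher-resolution mip)."""
    lod = math.log2(max(texels_per_pixel, 1e-6)) + bias
    return min(max(lod, 0.0), num_levels - 1)

# A distant surface whose footprint covers 8 texels per pixel:
print(mip_level(8.0))             # 3.0 at native sampling
print(mip_level(8.0, bias=-1.0))  # 2.0 with the 4x-SSAA bias: a sharper mip,
                                  # restoring sub-pixel texture frequency
                                  # that SSAA can then resolve cleanly
```

Without SSAA that same bias would just reintroduce texture shimmer, which is why the bias and the supersampling belong together.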
 
I don't see the point of DSR. At all. If you are paying the performance penalty for rendering at a higher resolution anyway, just get a higher-resolution monitor and play at that resolution without the downscaling.

There is for those of us playing on 1080p TVs. I use it in games that support it in the engine, like Arma 3 or BF4.
 
Really trying to understand this mentally.

I'm imagining a diagonal black line against a white background. Let's say that at 1080p up close you can see obvious stair stepping. So when rendered at 4k now there are black pixels close to the black line and white pixels close to the background and now there are gray pixels in between those.

So this reduces the perception of stair stepping from that same distance of view. Now we have to downsample this back to 1080p.

In my understanding this is mostly a matter of finding shades of grey for all those pixels at the border in a way that the diagonal line does not look like a distinct staircase but now a blurred but smoother diagonal line.

So I'm having a tough time figuring out what the Gaussian filter applied to the 4k render is doing and what purpose it serves.
 
> Really trying to understand this mentally. [...] So I'm having a tough time figuring out what the Gaussian filter applied to the 4k render is doing and what purpose it serves.

You should read through this: http://www.beyond3d.com/content/articles/122/3

There are different types of aliasing, and not all types are helped by the same type of AA.
 
> Really trying to understand this mentally. [...] So I'm having a tough time figuring out what the Gaussian filter applied to the 4k render is doing and what purpose it serves.

It's to prevent artifacts. If you have BF4, there's a scene where you're overlooking a dam, and there's a chain-link fence far in the distance. At normal res it doesn't look so bad, but if you use DSR with low smoothness, or even just the built-in SSAA, it produces MAJOR moiré artifacts. Bump up the smoothness and it goes away, but everything gets softer.

It's especially useful in situations where you're already running a screen on which individual pixels are hard to see; in that case, the blur is much less noticeable than the artifacts.

Remember, it's an adjustable-strength filter. Tuned to 25% it produces no more blur than a typical box filter, yet retains some of its artifact-preventing quality. So it's a win and superior to what we had before, however you look at it. Beyond that, you have the choice to add additional blur to prevent some artifacts at the cost of sharpness. In that sense, it's no different than normal antialiasing, which trades off performance to decrease artifacts. If the only thing that mattered was sharpness, we wouldn't run AA at all.
 
> Really trying to understand this mentally. [...] So I'm having a tough time figuring out what the Gaussian filter applied to the 4k render is doing and what purpose it serves.

You don't suddenly get grey pixels when rendering at 4k; it's still just white and black pixels. The only difference is that the stair steps are more fine-grained and thus less noticeable.

To get grey pixels you have to do some sort of averaging (or blurring, if you like), which is what all AA methods do (using varying algorithms).

When downsampling from 4k to 1080p you also have to use some sort of averaging algorithm, and that's where the Gaussian filter comes into play.
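That distinction is easy to demonstrate; here is a tiny NumPy sketch of a centre-sampled diagonal edge (a toy rasterizer, purely illustrative):

```python
import numpy as np

def render_edge(width, factor):
    """Binary 'render' of the region below the diagonal y = x,
    one sample at each pixel centre: only pure black or white."""
    n = width * factor
    ys, xs = np.indices((n, n)) + 0.5
    return (ys > xs).astype(float)

hi = render_edge(8, 4)                          # the '4k' render
lo = hi.reshape(8, 4, 8, 4).mean(axis=(1, 3))   # box-average down to '1080p'

# The high-res image still contains only 0s and 1s...
print(np.unique(hi))   # just 0.0 and 1.0, no grey anywhere
# ...grey appears only once the downsample averages the samples:
print(np.unique(lo))   # now includes a 0.375 grey along the edge
```

Every pixel of the high-res render is pure black or white; the intermediate grey (here exactly 6/16 = 0.375, the fraction of covered samples in each edge block) only exists after the averaging step.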
 
The Gaussian filter produces an effect very similar to SGSSAA, since it incorporates pixel data from a wider spread. I tested 4X SGSSAA vs 4X DSR yesterday, and both were good in their own way. SGSSAA did a lot better job reducing temporal aliasing, but there was a clear loss of detail, especially in the distance. DSR reduced the temporal aliasing to a lesser degree, but retained all of the detail.

I didn't try it yet, but I bet using both in combination would be the best of both worlds. It's kind of academic anyway, since SGSSAA doesn't work in DX10+.
 

Nothing in that link disputed what I wrote.

All it said was that sometimes OGSSAA works better and sometimes SGSSAA is better. It said sometimes one or the other was blurrier, but nowhere did it dispute that SGSSAA is more demanding (it is), nor did it say OGSSAA is different than downsampling (the result is the same or very close to it).

Ordered-grid SSAA simply leaves the pixels at higher res in a perfect grid of 4 pixels, just like 4k is vs 1080p. Sparse-grid SSAA takes those 4 pixels and rotates them a little to get a smoother result, or at least it is supposed to, but results in more computation.

Edit: It seems I got that backwards, at least according to this: http://www.dahlsys.com/misc/antialias/
It is RGSSAA that is more demanding and is rotated. Apparently SGSSAA compromises a bit on what it antialiases.
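For what it's worth, the grid-pattern difference is easy to picture in code (helper names are hypothetical; the angle is the classic arctan(1/2) rotated-grid choice):

```python
import math

def ordered_grid(n):
    """n x n ordered-grid sample offsets inside a unit pixel --
    exactly what plain downsampling / OGSSAA gives you."""
    step = 1.0 / n
    return [((i + 0.5) * step, (j + 0.5) * step)
            for i in range(n) for j in range(n)]

def rotated_grid(n, angle=math.atan(0.5)):
    """The same grid rotated about the pixel centre (RGSSAA-style)."""
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for x, y in ordered_grid(n):
        dx, dy = x - 0.5, y - 0.5
        out.append((0.5 + c * dx - s * dy, 0.5 + s * dx + c * dy))
    return out

og, rg = ordered_grid(2), rotated_grid(2)

# The rotated grid spreads its samples over more distinct x (and y)
# positions, which is why it resolves near-vertical and
# near-horizontal edges better than an ordered grid of the same size:
print(len({round(x, 3) for x, _ in og}))  # 2 distinct x positions
print(len({round(x, 3) for x, _ in rg}))  # 4 distinct x positions
```

With four samples, the ordered grid only ever tests two horizontal positions per pixel, while the rotated grid tests four, so a near-vertical edge gets twice as many distinct coverage levels.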
 
> That was a great explanation.

Yes, it is. Sadly, they didn't have a bigger chunk on tent filters, or more on reconstruction filters better than the box filter.
http://mynameismjp.wordpress.com/2012/10/28/msaa-resolve-filters/

One of the big reasons to use wider-than-pixel reconstruction/resolve is temporal stability.
I have tried to find a proper example animation of the subject, but couldn't find it. (Pixel-wide text moving up/down flickers like crazy with a box filter.)

Edit:
Added a nice link on sample patterns etc.:
http://mynameismjp.wordpress.com/2012/10/24/msaa-overview/
 