Discussion AMD Gaming Super Resolution GSR

Page 6 - AnandTech community discussion thread.

Krteq

Senior member
May 22, 2015
FSR/GSR implemented into OpenVR

GitHub - fholger/openvr_fsr

side-by-side screenshots (imgsli):
Important disclaimer

This is a best-effort experiment and hack to bring this upscaling technique to VR games which do not support it natively. Please understand that the approach taken here cannot guarantee the optimal quality that FSR might, in theory, be capable of. AMD has specific recommendations where and how FSR should be placed in the render pipeline. Due to the nature of this generic hack, I cannot guarantee nor control that all of these recommendations are actually met for any particular game. Please do not judge the quality of FSR solely by this mod :)
 

Mopetar

Diamond Member
Jan 31, 2011
Seems like a bit of a mixed bag. I think it generally makes Skyrim look better but Fallout worse (though I'm not sure if the color difference is due to FSR or just the clouds/sun changing between pictures). It doesn't seem to handle bare trees all that well, but then again it's a pretty limited set of images, so I wouldn't want to draw too many conclusions from just the two of them.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
Not precisely on topic, but still relevant to the whole upscaling on AMD thing regardless:

 

uzzi38

Platinum Member
Oct 16, 2019

I'm surprised there are any news articles on this at all; they've been talking about the whole ML-based-upscaling-via-DirectML thing for well over a year already.
 

blckgrffn

Diamond Member
May 1, 2003
I think the other missing stat is the FPS difference. The trade-off is important, IMO, because ugly trees can be weighed against FPS gains - especially minimum frames.

It's going to cost something when it comes to IQ.
 

Topweasel

Diamond Member
Oct 19, 2000
Well, that, and what it takes to get that performance at different settings. What do we have to do at 4K to get that performance (if you can at all), and how does that look versus 1440p or 1080p with all of the PQ options (or enough turned on to hit that performance)? Also, static shots don't really tell much of the story; once you add motion blur and other in-motion effects it will probably be a lot harder to pick out the differences. Linus just did a couple of videos, and basically only if you understand the tech and specifically look for signs of it running could you really notice in-game.
 

AtenRa

Lifer
Feb 2, 2009
I was curious to see how FSR looks at 1080p and what VSR can do for IQ and performance.
Enabling VSR (Virtual Super Resolution) at 1080p renders the image at 4K, so it is interesting to compare 1080p native vs VSR + Performance mode FSR.


So here are some pics from The Riftbreaker at 1080p. Again, this is with an RX 5500 XT 8GB.


1080p Native

1080p + FSR Performance

1080p + FSR Balanced

1080p + FSR Quality

1080p + FSR Ultra Quality


3840x2160 VSR

3840x2160 VSR + FSR Performance

3840x2160 VSR + FSR Balanced

3840x2160 VSR + FSR Quality

3840x2160 VSR + FSR Ultra Quality
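For reference on what each mode costs, AMD's published FSR 1.0 per-axis scale factors (1.3x Ultra Quality, 1.5x Quality, 1.7x Balanced, 2.0x Performance) imply the internal render resolutions behind these shots; a quick sketch:

```python
# Internal (pre-upscale) render resolutions implied by FSR 1.0's
# documented per-axis scale factors, for the two output sizes tested above.
FSR_SCALE = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def render_resolution(out_w, out_h, mode):
    """Resolution the game actually renders at before FSR upscales it."""
    s = FSR_SCALE[mode]
    return round(out_w / s), round(out_h / s)

for mode in FSR_SCALE:
    print(f"{mode}: 1080p <- {render_resolution(1920, 1080, mode)}, "
          f"4K <- {render_resolution(3840, 2160, mode)}")
```

So 4K VSR + FSR Performance is internally rendering at 1920x1080, which helps explain how it can look better than native 1080p at a similar frame rate.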
 

Hitman928

Diamond Member
Apr 15, 2012
4K VSR + FSR Performance looks like a win to me. Clearly higher IQ with minimal performance impact (assuming the fps shown in the screenshots is representative of relative performance). At 1080p, FSR UQ looks decent enough, but the fps boost isn't much. At FSR Q the fps boost starts to be significant, but the quality takes a clear hit as well.

Edit: 4K + FSR Q looks pretty good quality-wise, too, for a good bump in performance. Any chance you could grab some native 900p and/or 720p for comparison to the 1080p FSR?
 

AtenRa

Lifer
Feb 2, 2009
Ready ;)

Just to point out, those two pics were taken in windowed mode.

900p Native

720p Native
 

Hitman928

Diamond Member
Apr 15, 2012
Thanks. 1080p + FSR Balanced looks better to me than 720p native and has the same performance. 1080p + FSR UQ has the same performance as 900p and looks comparable to me. I think the advantage FSR has in all of these is the sharpening effect. Native + sharpening (e.g. CAS) would probably look better than plain native because of the blurriness, which I'm guessing comes from the AA method.
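To make the sharpening point concrete, here's a minimal unsharp-mask sketch; a stand-in for the general idea, not AMD's actual CAS kernel (CAS adapts its strength to local contrast, this toy does not):

```python
import numpy as np

def unsharp_mask(img, amount=0.5):
    """Sharpen a 2D grayscale image by adding back the difference
    between each pixel and its 3x3 neighborhood mean."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # 3x3 box blur built from the nine shifted views of the padded image.
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)

# A soft edge gains contrast (and a slight overshoot) after sharpening,
# which is why a sharpening pass can make output look crisper than a
# blurry AA'd native image.
edge = np.array([[0.0, 0.0, 1.0, 1.0]] * 4)
sharp = unsharp_mask(edge)
```

The pixels on either side of the edge are pushed apart (undershoot on the dark side, overshoot on the bright side), which reads as extra crispness.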
 

moinmoin

Diamond Member
Jun 1, 2017
New upscaler coming from AMD?

Thomas Arcila, one of the two speakers and likely the lead developer, has only been at AMD since December. So I'd expect an outlook on tech planned for an FSR 2.0 (or whatever name that project eventually launches under), with the public launch still some time off.
 

Stuka87

Diamond Member
Dec 10, 2010
Not needing an AI pass is a huge thing if it matches DLSS 2.0. It's one less thing for developers to have to worry about. And working on any GPU means they don't have to do separate work for multiple vendors.
 

Saylick

Platinum Member
Sep 10, 2012
I suspect FSR 2.0 will need some sort of neural network to match DLSS 2.0, but it won't require matrix units to run the FMA instructions; it can just use the shader engine. The only reason Nvidia runs it on their matrix units is that they simply have them available, so they market them as a requirement to get customers to buy graphics cards with tensor cores. As we saw with Ampere, they doubled the TFLOPS but performance didn't scale proportionally. Navi 31 is supposed to triple the FP32 throughput of N21, and a similar phenomenon might occur. Why not use some of the shaders to run FSR 2.0 if you're getting diminishing returns on your TFLOP scaling anyway?
 

Justinus

Platinum Member
Oct 10, 2005
Ampere didn't scale with the paper TFLOP spec because half the new "CUDA cores" are units that can do either FP or INT, but not both, and the figure that came out was that on average ~36% of gaming work is INT. That means the effective TFLOPS for gaming performance is potentially only 64% of the paper figure.

RDNA3 shouldn't have any sort of tradeoff like that.
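That 64% figure follows from simple arithmetic; a back-of-envelope sketch (the paper-TFLOPS number below is an illustrative RTX 3080-class figure, not from the post):

```python
# If ~36% of the gaming instruction mix is INT, those issue slots on the
# shared FP/INT units can't do FP32 math, so the realizable FP32 rate is
# only (1 - 0.36) = 64% of the advertised figure.
paper_tflops = 29.8      # illustrative advertised FP32 TFLOPS (3080-class)
int_fraction = 0.36      # average share of gaming work that is INT

effective_tflops = paper_tflops * (1 - int_fraction)
print(f"effective FP32 ~ {effective_tflops:.1f} TFLOPS "
      f"({1 - int_fraction:.0%} of paper)")
```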
 

Frenetic Pony

Member
May 1, 2012
Eh, DLSS 2.0 is the same TAA everyone uses with an AI step or two thrown in. Nvidia have been really, really loath to mention this at all; they want their thing to seem like some sort of unreplicable "magic," which is why they just keep pushing "AI" as a buzzword over and over.

Maybe the AI is used for the old-sample re-shading step, which is the real struggle with temporal upscaling. Because a bunch of pixels are the same as last frame but should be changed this frame, you have to update them somehow based on the few new pixels you do have. Otherwise you get stuff like in the UE5 Matrix demo, where shadows and such "speckle ghost" over time as some pixels immediately receive new shading/shadows while others don't. Other than whatever neat trick they're doing there, DLSS 2 just matches the capabilities of a nice, very clean TAA setup.
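The reproject-then-blend loop at the heart of any TAA-style accumulator can be sketched as a toy (hypothetical code, not DLSS or any shipping implementation; whole-pixel motion and a flat blend weight for simplicity):

```python
import numpy as np

def taa_accumulate(history, current, motion_px, alpha=0.1):
    """Blend the current frame into motion-reprojected history.

    history, current: (H, W) grayscale frames
    motion_px: (dy, dx) whole-pixel camera motion since last frame
    alpha: weight of the new sample; small alpha = smooth but ghost-prone,
    which is exactly the stale-history problem described above.
    """
    # Fetch history from where each pixel was last frame.
    reprojected = np.roll(history, shift=motion_px, axis=(0, 1))
    return (1 - alpha) * reprojected + alpha * current

# A static scene converges toward the new shading over many frames;
# pixels whose shading just changed lag behind until enough frames pass.
hist = np.zeros((4, 4))
cur = np.ones((4, 4))
for _ in range(50):
    hist = taa_accumulate(hist, cur, motion_px=(0, 0))
```

After 50 frames the history has converged to ~0.995 of the new value, but at frame 5 it would still be ~0.41; that lag is what shows up as ghosting or speckling when shading changes abruptly.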
 

Mopetar

Diamond Member
Jan 31, 2011
If you've got a Navi 31-caliber card, why are you even running FSR/DLSS at all? Those should have plenty of horsepower, even with the newest titles at the most demanding resolutions.

For those cards it's only something to consider when the card is several years old and can't keep up like it used to. Of course, by then you'd hope it supports whatever the newest version of FSR/DLSS is.
 

Justinus

Platinum Member
Oct 10, 2005
2,932
1,123
136
I think it's going to come down to performance. I plan to buy a 7900 XT, and I game on a 27GP950-B, a 4K 160 Hz monitor. Right now the most demanding current games run around 60-90 FPS (highest preset) on my 6900 XT. If N31/7900 XT does indeed deliver a 2.5-2.7x performance increase, that would mean 150-240 FPS in these titles, which absolutely would not require any form of upscaling or performance-enhancement technique.

That being said, there are some games that run like garbage even with settings turned down and struggle to even achieve 60 FPS. I'm looking at you, Control, Cyberpunk, and The Medium. 2.5x scaling on those still wouldn't get me to my monitor's refresh rate.

While there have been some awful-looking FSR and DLSS implementations, there have also been some excellent ones. I can't tell the difference in Far Cry 6 between FSR Ultra Quality and FSR off. The better these techniques and implementations get, the more we will see consistent image quality and performance benefits.

I have a feeling that with the jump in GPU power we're going to see even more use of ray tracing and other high-demand rendering techniques to sap some of that extra power.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
I wonder if AMD figures that they can over-provision on general purpose compute units and use that overhead to drive a more active upscaling technique rather than NV's method of dedicating specialized hardware to it.

AMD has to be smarter and cagier with their die space than NV thanks to their core business being CPUs, not GPUs, and with MCM we may end up with way more shader power than we really need.

Alternatively, MCM may usher in SoC-style GPUs with an I/O block, a tensor block, a shader block, etc. - all small dies with absurdly high yields, highly customizable.
 

Frenetic Pony

Member
May 1, 2012
Eh, RDNA2 can run AI workloads just fine. The compute units can split all the way down to quad-rate INT8 (or even 8x INT4, but that's a bit low for precision), a popular format for deployable AI workloads. I don't think they necessarily need dedicated blocks for that "extra oomph," as they're flexible enough to run a lot of things on general compute units.

I expect the announcement of "no AI required" is due to developer interest. Developers want a single solution they can implement for everyone they're going to sell to; working on a feature that benefits everyone is a far better use of time than working on some high-end feature that only, say, 15% of their userbase might even see.
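On the INT4 precision caveat: a quick quantization round-trip shows why 16 levels is marginal for network weights (toy symmetric quantizer over random stand-in weights; not any shipping inference path):

```python
import numpy as np

def quantize_roundtrip(x, bits):
    """Symmetric linear quantization to signed `bits`-wide integers and back."""
    qmax = 2 ** (bits - 1) - 1            # 127 for INT8, 7 for INT4
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)             # stand-in "weights"
err8 = np.max(np.abs(w - quantize_roundtrip(w, 8)))
err4 = np.max(np.abs(w - quantize_roundtrip(w, 4)))
print(f"max round-trip error: INT8 {err8:.4f}, INT4 {err4:.4f}")
```

The INT4 error is larger by roughly the ratio of the step sizes (127/7, about 18x): throughput doubles again over INT8, but accuracy-sensitive layers suffer, which is the "a bit low for precision" trade-off.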
 

Shamrock

Golden Member
Oct 11, 1999
Some more from RedGamingTech - he knows something, and he chooses his words carefully. He also states it's no longer spatial but temporal upscaling.

 
