Discussion: AMD Gaming Super Resolution (GSR)


DisEnchantment

Golden Member
Mar 3, 2017
1,608
5,816
136
A new patent came up today for AMD's FSR.




20210150669
GAMING SUPER RESOLUTION

Abstract
A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generate non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The processor is also configured to convert the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and provide the output image for display.


[0008] Conventional super-resolution techniques include a variety of conventional neural network architectures which perform super-resolution by upscaling images using linear functions. These linear functions do not, however, utilize the advantages of other types of information (e.g., non-linear information), which typically results in blurry and/or corrupted images. In addition, conventional neural network architectures are generalizable and trained to operate without significant knowledge of an immediate problem. Other conventional super-resolution techniques use deep learning approaches. The deep learning techniques do not, however, incorporate important aspects of the original image, resulting in lost color and lost detail information.

[0009] The present application provides devices and methods for efficiently super-resolving an image, which preserves the original information of the image while upscaling the image and improving fidelity. The devices and methods utilize linear and non-linear up-sampling in a wholly learned environment.

[0010] The devices and methods include a gaming super resolution (GSR) network architecture which efficiently super resolves images in a convolutional and generalizable manner. The GSR architecture employs image condensation and a combination of linear and nonlinear operations to accelerate the process to gaming viable levels. GSR renders images at a low quality scale to create high quality image approximations and achieve high framerates. High quality reference images are approximated by applying a specific configuration of convolutional layers and activation functions to a low quality reference image. The GSR network approximates more generalized problems more accurately and efficiently than conventional super resolution techniques by training the weights of the convolutional layers with a corpus of images.

[0011] A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generate non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The processor is also configured to convert the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and provide the output image for display.

[0012] A processing device is provided which includes memory and a processor configured to receive an input image having a first resolution. The processor is also configured to generate a plurality of non-linear down-sampled versions of the input image via a non-linear upscaling network and generate one or more linear down-sampled versions of the input image via a linear upscaling network. The processor is also configured to combine the non-linear down-sampled versions and the one or more linear down-sampled versions to provide a plurality of combined down-sampled versions. The processor is also configured to convert the combined down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution by assigning, to each of a plurality of pixel blocks of the output image, a co-located pixel in each of the combined down-sampled versions and provide the output image for display.

[0013] A super resolution processing method is provided which improves processing performance. The method includes receiving an input image having a first resolution, generating linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generating non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The method also includes converting the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and providing the output image for display.

It uses inferencing for upscaling. As with all ML models, how you assemble the layers, what parameters you choose, which activation functions you pick, etc., matters a lot; the difference can be night and day in accuracy, performance and memory.
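To make the claim language a bit more concrete, here's a rough PyTorch sketch of the structure being described: a linear path (plain convolutions), a non-linear path (convolutions plus activations), both producing low-resolution feature maps that are combined and then rearranged into a higher-resolution image via pixel shuffle (the "co-located pixel per output block" step). To be clear, the layer counts, channel widths and activation choices below are just my guesses for illustration, not anything from the patent.

```python
# Toy sketch of the GSR-style structure described in the patent claims.
# Layer sizes and activations are illustrative guesses, not AMD's network.
import torch
import torch.nn as nn

class ToyGSR(nn.Module):
    def __init__(self, scale=2, channels=3, features=32):
        super().__init__()
        # Non-linear path: convolutions with activations.
        self.nonlinear = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(features, channels * scale * scale, 3, padding=1),
            nn.PReLU(),
        )
        # Linear path: a single convolution, no activation.
        self.linear = nn.Conv2d(channels, channels * scale * scale, 3, padding=1)
        # "Convert the down-sampled versions into pixels of the output image":
        # depth-to-space assigns each low-res feature map a co-located pixel
        # in every scale x scale block of the output.
        self.to_pixels = nn.PixelShuffle(scale)

    def forward(self, x):
        combined = self.nonlinear(x) + self.linear(x)  # combine both paths
        return self.to_pixels(combined)                # low-res maps -> high-res image

lr = torch.rand(1, 3, 540, 960)   # e.g. a 960x540 input frame
hr = ToyGSR(scale=2)(lr)          # -> shape (1, 3, 1080, 1920)
print(hr.shape)
```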

[Patent figure attachments]
 

Timorous

Golden Member
Oct 27, 2008
1,624
2,790
136
At this point I'm convinced he's doing this intentionally. I refuse to believe he's actually dumb enough to not notice these issues with his comparisons.

I think I will Hanlon's Razor it and assume he is just not as good as people claimed.

EDIT: I still think he went looking for images to back up his pre-test conclusion; he just was not very good at it.
 

Tup3x

Senior member
Dec 31, 2016
965
951
136
- To your first point, I fully expect DLSS 3.0 to be able to run on shaders, with a possible locked "ultra quality/performance" preset for cards with Tensor cores (as the Tensor cores supposedly would process whatever algos faster than shaders). It would be crazy for NV not to at this point, as they've gotten a 3-year, competitor-free return on the tech, and they know they'll have to go after FSR hard and fast to stop it from taking too deep a root.

To your second point, I fully expect FSR 2.0 to look more like Unreal Engine's TAAU by incorporating temporal data in addition to what it currently does to output a higher quality image. That's really the one big gaping hole in the tech at the moment, and it would help a lot of engines level the playing field with Unreal Engine on this front.
As it stands right now, FSR and DLSS are two slightly different things. FSR is a scaler, while DLSS is a temporal anti-aliasing solution that can also reconstruct a higher resolution from a lower one. FSR doesn't do anti-aliasing. A potential FSR 2.0 of that kind would be a total rewrite.
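To put the distinction in pseudo-code terms (purely schematic; the function names and arguments below are mine, not either vendor's actual API): a spatial scaler only ever sees the current frame, while a temporal solution also consumes history and motion vectors.

```python
# Schematic only -- not real FSR/DLSS code.
import numpy as np

def spatial_upscale(frame, scale):
    """FSR-1.0-style scaler: the output depends only on the current frame.
    (Real FSR uses an edge-adaptive kernel plus sharpening, not nearest-neighbour.)"""
    h, w, _ = frame.shape
    ys = (np.arange(int(h * scale)) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(int(w * scale)) / scale).astype(int).clip(0, w - 1)
    return frame[ys][:, xs]

def temporal_reconstruct(frame, history, motion_vectors, scale):
    """DLSS-2.0/TAAU-style reconstruction also consumes the previous output and
    motion vectors, which is what lets it anti-alias while it upscales."""
    upscaled = spatial_upscale(frame, scale)
    # Placeholder blend: a real implementation would reproject `history` with
    # `motion_vectors` and reject stale samples before accumulating.
    return 0.9 * history + 0.1 * upscaled

lo_res = np.random.rand(540, 960, 3)   # current 960x540 frame
hist = np.zeros((1080, 1920, 3))       # previous accumulated 1080p output
out = temporal_reconstruct(lo_res, hist, motion_vectors=None, scale=2.0)
```

That extra history/motion input is exactly why a TAAU-like FSR 2.0 would be a rewrite rather than a patch on the current scaler.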
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,826
7,190
136
They will not be stopping FSR; AMD powers both Xbox and PlayStation, and any multiplatform game is going to be using it because of that. Nvidia will probably pay to have their tech used exclusively in some PC ports, but that won't stop the momentum. The most likely scenario is that it plays out like the adaptive sync battle: the free solution dominates, but Nvidia supports it or a similar solution, as you have pointed out, while having a better exclusive version with a price tag.

- To be fair, DLSS might be the holy grail for porting games to the wildly popular Switch (and I believe some games have already announced support for it on the Switch), so there is definitely plenty of incentive for devs to look at both solutions, especially if something like DLSS is integrated at the engine level like it is in UE5.

No reason FSR couldn't work there as well, but NV hasn't exactly been known for playing fair when you're playing in their walled garden...
 

coercitiv

Diamond Member
Jan 24, 2014
6,213
11,954
136
Apparently Alex applied CAS to his "native" and TAAU comparison points, even in the updated review.
"simple scaling" :cool:

PS: I don't have a problem with him using Native+CAS to benchmark against FSR, but why not be open about it?
 

uzzi38

Platinum Member
Oct 16, 2019
2,635
5,984
146
"simple scaling" :cool:

PS: I don't have a problem with him using Native+CAS to benchmark against FSR, but why not be open about it?
Here's my issue with using Native+CAS comparisons: you're assuming the end user is competent enough to sit down and optimise the specific sharpening intensity that they would like to game with.

That's not indicative of what your average end-user will do.

Not to mention, if you're going to do that, why not do the same for FSR using driver-level implementations of sharpening? Might as well fine-tune every implementation at that point. Unless Alex wasn't fine-tuning but rather guessing how much sharpening was applied, which is perhaps even worse, because then you have no clue whether you're providing a like-for-like comparison.
 

Saylick

Diamond Member
Sep 10, 2012
3,172
6,409
136
If nothing else it will clarify Nvidia's PR regarding the tech DLSS uses. So far DLSS is limited to RTX cards due to requiring Tensor Cores. Switch doesn't have Tensor Cores.
If what this guy says is true, DLSS 2.0 doesn't use deep learning to upsample the individual images themselves (i.e., DLSS 1.0); it uses a trained network to determine which frames to select for the multi-frame upsampling. Regardless, my understanding is that inferencing doesn't require tensor cores, but having a lot of TOPS does help, so a bank of FP units that can do a ton of half-precision or even quarter-precision operations may suffice. I don't know how deep the neural network designed to select the sample frames needs to be, but I imagine it can't be too deep. At the end of the day, deep learning is just used to determine which frames to select, but the multi-frame upsampling portion of the pipeline still needs to run as well. I imagine if the former took too long, it would eliminate any advantage it brings in accuracy/final image quality vs. just using all of the previous sample frames.
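If that description is accurate, the heavy lifting is still classic multi-frame accumulation, and the network only predicts how each history sample gets blended. A toy sketch of that split (the tiny weight-predicting net and all shapes below are made up for illustration, not NVIDIA's pipeline):

```python
# Toy illustration of "a network picks the blend weights, the accumulation
# does the upscaling" -- not NVIDIA's actual DLSS implementation.
import torch
import torch.nn as nn

# A deliberately tiny network: given current + reprojected history pixels,
# predict a per-pixel blend weight in [0, 1].
weight_net = nn.Sequential(
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

def accumulate(current_hr, reprojected_history):
    """current_hr: current frame resampled to output resolution.
    reprojected_history: previous accumulated output, motion-reprojected."""
    x = torch.cat([current_hr, reprojected_history], dim=1)   # (N, 6, H, W)
    w = weight_net(x)                                         # (N, 1, H, W)
    # Inference like this is just FP16-friendly convolutions: tensor cores
    # speed it up, but plain shader FP units could run it too.
    return w * reprojected_history + (1 - w) * current_hr

cur = torch.rand(1, 3, 1080, 1920)
hist = torch.rand(1, 3, 1080, 1920)
out = accumulate(cur, hist)
```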

 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Ok a few pics from me


RX 5500 XT 8GB

The Riftbreaker

Native 1440p
1440p + FSR Performance
1440p + FSR Balanced
1440p + FSR Quality
1440p + FSR Ultra Quality

Virtual Super Resolution to 5120x2880:
Native VSR 5120x2880
Native VSR 5120x2880 + FSR Performance
Native VSR 5120x2880 + FSR Balanced
Native VSR 5120x2880 + FSR Quality
Native VSR 5120x2880 + FSR Ultra Quality

If you want the best image quality with acceptable fps, turn on VSR + FSR Performance. Just compare the Native 1440p shots above with the Native VSR 5120x2880 + FSR Performance ones.
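For context, the arithmetic behind that suggestion, assuming FSR 1.0's published per-axis scale factors (Ultra Quality 1.3x, Quality 1.5x, Balanced 1.7x, Performance 2.0x); the helper below and its rounding are my own quick check:

```python
# FSR 1.0 per-axis scale factors for each quality mode (AMD's published values).
FSR_MODES = {"Ultra Quality": 1.3, "Quality": 1.5, "Balanced": 1.7, "Performance": 2.0}

def render_resolution(target_w, target_h, mode):
    """Internal render resolution for a given output target and FSR mode."""
    s = FSR_MODES[mode]
    return round(target_w / s), round(target_h / s)

for mode in FSR_MODES:
    print(mode, render_resolution(5120, 2880, mode))
# Performance at a 5120x2880 VSR target -> (2560, 1440): roughly the same
# shading cost as native 1440p, but reconstructed to 5K and downsampled back.
```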
 

Gideon

Golden Member
Nov 27, 2007
1,646
3,712
136
Played around with FSR in Dota. It looks gorgeous, particularly when using it at 99.5% scaling or with VSR.

I ended up playing with VSR 2880p at 75% scaling (4K actual rendering resolution), downscaled to 1440p.

I chose that because it lands my FPS closest to my monitor's refresh rate (165 Hz). Dota tends to have rough edges on some vegetation, clothing, etc. (like Zeus' robe in the image); this setup managed to totally eliminate them.

Unfortunately I can't share the PNG images, as they are very slightly over 32 MB and no service lets me upload those, but this is how it looks as a JPG:

k6PZYZb.jpeg

SKuv2O3.jpg
 

Krteq

Senior member
May 22, 2015
991
671
136
Regarding TAAU, it IS indeed breaking DoF... even in UE5.
Out of curiosity, will the new TAA upscaling behave well with depth of field? Currently, when you set r.TemporalAA.Upsampling=1, most of the DOF just disappears.
So when r.TemporalAA.Upsampling=1, it basically forces r.DOF.Recombine.Quality=0, which loses the slight DOF convolution, and that is due to DiaphragmDOF.cpp’s bSupportsSlightOutOfFocus. There need to be some changes in the handling of the slight out-of-focus convolution (about 5 pixels and below) when doing temporal upsampling that I didn’t have time to get down to. And we were only using temporal upsampling on current-gen consoles. It wasn’t a big deal back then, because if your frame needed to be temporally upsampled, that probably meant you didn’t have the performance to run DOF’s slight out of focus… However, we ran into exactly this issue for our Lumen in the Land of Nanite demo running on PS5, but it is still a prototype and I’m not sure whether I’m going to have this finished by 4.26’s release. But yeah, given how temporal upsampling is going to become important, it’s definitely something to fix, very high on the priority list.

forums.unrealengine.com - GEN 5 TEMPORAL ANTI-ALIASING
 

uzzi38

Platinum Member
Oct 16, 2019
2,635
5,984
146
Regarding TAAU, it IS indeed breaking DoF... even in UE5.


forums.unrealengine.com - GEN 5 TEMPORAL ANTI-ALIASING
If I'm reading that post correctly, it's probably fixed in UE5 now, but it was broken in UE4. Epic Games were aware of it, but because TAAU was primarily being used on consoles, where DoF generally wasn't being applied, they just left it be.

At the end of the day, it's the same result: DoF was broken no matter how you cut it, and the supposed industry expert on image clarity didn't miss it just once, but twice.

Laughable.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
New video from Hardware Unboxed (DOTA 2) with an iGPU and older entry-level dGPUs like the GTX 750 Ti, RX 550 and GT 1030 DDR4

Not a great example, as you can run DOTA fine on anything. However, as someone said previously, the fact that FSR is more effective at 1440p probably says more about the quality of the assets in DOTA than it does about how good FSR is. DOTA is clearly built for 1080p; if it were built for 1440p (higher-quality assets), then FSR would show exactly the same increase in blurriness as you currently see at 1080p - the % reduction FSR uses is identical, after all.
 

Kedas

Senior member
Dec 6, 2018
355
339
136
If game developers do not add FSR to their games, it's because they don't want to.
Even modders were able to add it to Grand Theft Auto V.
It's not perfect, but it indicates the amount of effort needed to get it done.