Discussion: AMD Gaming Super Resolution (GSR)


DisEnchantment

Golden Member
Mar 3, 2017
A new patent came up today for AMD's FSR:




20210150669
GAMING SUPER RESOLUTION

Abstract
A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generate non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The processor is also configured to convert the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and provide the output image for display.


[0008] Conventional super-resolution techniques include a variety of conventional neural network architectures which perform super-resolution by upscaling images using linear functions. These linear functions do not, however, utilize the advantages of other types of information (e.g., non-linear information), which typically results in blurry and/or corrupted images. In addition, conventional neural network architectures are generalizable and trained to operate without significant knowledge of an immediate problem. Other conventional super-resolution techniques use deep learning approaches. The deep learning techniques do not, however, incorporate important aspects of the original image, resulting in lost color and lost detail information.

[0009] The present application provides devices and methods for efficiently super-resolving an image, which preserves the original information of the image while upscaling the image and improving fidelity. The devices and methods utilize linear and non-linear up-sampling in a wholly learned environment.

[0010] The devices and methods include a gaming super resolution (GSR) network architecture which efficiently super resolves images in a convolutional and generalizable manner. The GSR architecture employs image condensation and a combination of linear and nonlinear operations to accelerate the process to gaming viable levels. GSR renders images at a low quality scale to create high quality image approximations and achieve high framerates. High quality reference images are approximated by applying a specific configuration of convolutional layers and activation functions to a low quality reference image. The GSR network approximates more generalized problems more accurately and efficiently than conventional super resolution techniques by training the weights of the convolutional layers with a corpus of images.

[0011] A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generate non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The processor is also configured to convert the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and provide the output image for display.

[0012] A processing device is provided which includes memory and a processor configured to receive an input image having a first resolution. The processor is also configured to generate a plurality of non-linear down-sampled versions of the input image via a non-linear upscaling network and generate one or more linear down-sampled versions of the input image via a linear upscaling network. The processor is also configured to combine the non-linear down-sampled versions and the one or more linear down-sampled versions to provide a plurality of combined down-sampled versions. The processor is also configured to convert the combined down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution by assigning, to each of a plurality of pixel blocks of the output image, a co-located pixel in each of the combined down-sampled versions and provide the output image for display.

[0013] A super resolution processing method is provided which improves processing performance. The method includes receiving an input image having a first resolution, generating linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generating non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The method also includes converting the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and providing the output image for display.

It uses inferencing for upscaling. As with all ML models, how you assemble the layers, what kind of parameters you choose, which activation functions you choose, etc. matters a lot, and the difference can be night and day in accuracy, performance, and memory.
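The conversion step in paragraph [0012] (assigning, to each pixel block of the output image, a co-located pixel from each of the combined down-sampled versions) is essentially the depth-to-space rearrangement used in many sub-pixel super-resolution networks. Here is a minimal pure-Python sketch of just that step; the function name and the toy data are my own, not from the patent:

```python
def depth_to_space(feature_maps, r):
    """Rearrange r*r low-resolution 'down-sampled versions' into one
    high-resolution image, per patent paragraph [0012]: each r x r
    block of output pixels takes the co-located pixel from each of
    the r*r versions.

    feature_maps: list of r*r images, each a list of rows (H x W).
    Returns an image of size (r*H) x (r*W).
    """
    assert len(feature_maps) == r * r, "need exactly r^2 versions"
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for c, fmap in enumerate(feature_maps):
        di, dj = divmod(c, r)  # position of this version inside each block
        for y in range(h):
            for x in range(w):
                out[y * r + di][x * r + dj] = fmap[y][x]
    return out

# Four 2x2 "down-sampled versions" -> one 4x4 output image.
versions = [[[0, 1], [2, 3]],      # fills (0,0) of each 2x2 block
            [[4, 5], [6, 7]],      # fills (0,1)
            [[8, 9], [10, 11]],    # fills (1,0)
            [[12, 13], [14, 15]]]  # fills (1,1)
print(depth_to_space(versions, 2))  # first row: [0, 4, 1, 5]
```

In a real network the r*r versions would be the output channels of the final convolutional layer, so this rearrangement trades channel depth for spatial resolution without any extra learned parameters.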


biostud

Lifer
Feb 27, 2003
I used a similar feature a lot in the year I spent playing with a 2200G as my main PC... where you would turn on "hardware GPU scaling" and whatever fullscreen game, no matter the resolution, would get upscaled to native monitor resolution.

It was a very useful feature; the only problem is that I was forced to use fullscreen in games. This seems like an update of that tech.

Does it support APUs? Because this is very, very useful for APU gaming.
Iirc only those with RDNA cores, not older ones with Vega graphics.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
I'm really glad AMD is implementing this sort of thing at the driver level. I always really liked AMD's driver package when I had my HD 7950; I didn't need as much 3rd-party crap to do basic things like overclocking the way I do with NV software.

That being said, I'll say the same thing I say in every DLSS thread: This is the kind of software you use for the last 1-2 years of your GPU's life to wring out a bit more performance before an upgrade or if you're close to a generational turn-over.

It really shouldn't be something we come to rely on for contemporary games to work at a card's given performance level.
 

biostud

Lifer
Feb 27, 2003
I'm really glad AMD is implementing this sort of thing at the driver level. I always really liked AMD's driver package when I had my HD 7950; I didn't need as much 3rd-party crap to do basic things like overclocking the way I do with NV software.

That being said, I'll say the same thing I say in every DLSS thread: This is the kind of software you use for the last 1-2 years of your GPU's life to wring out a bit more performance before an upgrade or if you're close to a generational turn-over.

It really shouldn't be something we come to rely on for contemporary games to work at a card's given performance level.
Normally I would agree, but there are instances where it makes good sense: if you have a midrange card driving a 1440p screen, or have a 4K screen and prefer high fps over a little quality loss. In fast-paced games you are less likely to see the difference. It can also allow you to enable ray tracing and still get playable framerates. And as long as the option is available, we can all choose the settings we prefer. :)
 

Saylick

Diamond Member
Sep 10, 2012

blckgrffn

Diamond Member
May 1, 2003
I am hoping this means we'll get "performance RT" settings for PS5 and SX that allow for "4K"/60fps/RT lighting as an option.

Right now you have to pick Pretty 30 FPS or Not as Pretty 60 FPS and I would probably be fine with some (more) upscaling to get closer to having my cake and eating it too.

Even if it is just a SX thing, it will likely drive adoption of the feature for devs who can't justify the investment for "real" optimization.

Bonus points for working on nvidia cards that are solid but don't support DLSS. Should help the 1660's that are still getting sold in volume live long lives.

Also, may this be the final nail in the "BUT DLSS" coffin. If its difference is marginal, then whatever. I am in the camp where I think native resolution is the best resolution, but get tired of every review having a giant asterisk about 3-5 years down the road you'd want nvidia for DLSS. Meh.
 

zebrax2

Senior member
Nov 18, 2007
A few observations on native vs. Quality:
The sharpening in FSR 2.0 is quite apparent compared to native. The upside is that the detail on some textures is more visible than at native; the downside is that it's less effective on some textures, creating a mishmash of sharp and blurry textures. On certain edges FSR 2.0 looks like it has less aliasing.

Scene 1
It looks like FSR 2.0 introduces some moiré on the TV screens.

Scene 2
Weird black spots on the castle walls, but the wooden cable reel/table, on the other hand, looks better than native (the red texture on top of the table/cable reel has artifacts at native that aren't visible in FSR 2.0).
 
Mar 11, 2004
I'm really glad AMD is implementing this sort of thing at the driver level. I always really liked AMD's driver package when I had my HD 7950; I didn't need as much 3rd-party crap to do basic things like overclocking the way I do with NV software.

That being said, I'll say the same thing I say in every DLSS thread: This is the kind of software you use for the last 1-2 years of your GPU's life to wring out a bit more performance before an upgrade or if you're close to a generational turn-over.

It really shouldn't be something we come to rely on for contemporary games to work at a card's given performance level.

I don't agree. For starters, this isn't just tacked on. People made the same arguments when the more efficient versions of AA were being developed, and then that stuff started being integrated into the games/engines. Granted, I still see people complaining about that even though overall graphics have actually improved. And by far the biggest issue with regard to visuals (always has been and always will be) is art direction. I wish there was much more focus on that instead of nitpicking the slightly jagged rendering of an electric power line when you zoom in 4x+ on a static image.

I'd argue that it's not just that it is being integrated more, but that it's becoming essential for games to do what they can to cheat to offer comparable visuals at high resolutions and framerates. They're pushing hybrid ray tracing, and if we're going to try to make the move to AR/VR, we're going to need both higher resolutions and higher framerates.
 

Saylick

Diamond Member
Sep 10, 2012
I just got through the FSR 2.0 presentation... I don't have a background in computer science, so a lot of it flies over my head, but it appears that the upscaling algorithm used in FSR 2.0 is again Lanczos, which was used in FSR 1.0. The difference this time is that FSR 2.0 gives the algorithm more samples to work with, since it is a temporal upscaler. Each sample is assigned a weight, where samples closest to the center of the pixel and those closest to the current frame in time are weighted more, and the final output is the sum of each sample multiplied by its respective weight. As I understand it, DLSS does away with Lanczos and the weights are instead determined using a neural network, which has been trained on 16K images beforehand. It looks like AMD also implemented some case-specific optimizations to mitigate typical temporal upscaler issues, such as ghosting and the inability to render thin lines.
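The weighting scheme described above can be sketched roughly like this. This is only an illustration of the general idea (a Lanczos spatial kernel combined with a recency weight), not AMD's actual implementation; the function names, the 2-lobe kernel, and the decay constant are my own assumptions:

```python
import math

def lanczos(x, a=2):
    """Lanczos kernel: sinc(x) * sinc(x/a) inside the support |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resolve_pixel(samples):
    """Normalized weighted blend of temporal samples for one output pixel.

    Each sample is (value, dx, dy, age): a colour value, its offset from
    the output pixel centre, and its age in frames. The spatial weight
    comes from the Lanczos kernel (closer to the centre = heavier), and
    the temporal weight decays with age (newer frames = heavier). The
    decay constant 0.5 is an illustrative choice.
    """
    num = den = 0.0
    for value, dx, dy, age in samples:
        w = lanczos(dx) * lanczos(dy) * math.exp(-0.5 * age)
        num += w * value
        den += w
    return num / den if den else 0.0

# A dead-centre sample from the current frame dominates an off-centre,
# one-frame-old sample.
print(resolve_pixel([(1.0, 0.0, 0.0, 0), (0.0, 0.5, 0.0, 1)]))
```

The normalization by the sum of weights is what keeps brightness stable regardless of how many history samples happen to land near a given pixel.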

Perhaps someone with a background more suitable than mine can also weigh in?
 

GodisanAtheist

Diamond Member
Nov 16, 2006
I don't agree. For starters, this isn't just tacked on. People made the same arguments when the more efficient versions of AA were being developed, and then that stuff started being integrated into the games/engines. Granted, I still see people complaining about that even though overall graphics have actually improved. And by far the biggest issue with regard to visuals (always has been and always will be) is art direction. I wish there was much more focus on that instead of nitpicking the slightly jagged rendering of an electric power line when you zoom in 4x+ on a static image.

I'd argue that it's not just that it is being integrated more, but that it's becoming essential for games to do what they can to cheat to offer comparable visuals at high resolutions and framerates. They're pushing hybrid ray tracing, and if we're going to try to make the move to AR/VR, we're going to need both higher resolutions and higher framerates.

- My big worry with upscaling tech is that devs will use it as a get-out-of-jail-free card in terms of performance tuning and what-have-you. Why waste development hours being smart in your level design and programming and everything else when instead you can just release a game that runs like dog**** and tell everyone to use DLSS or FSR to get acceptable framerates?

If anything, a pretty common complaint nowadays is that games don't really look *that much better* for substantially worse performance than games that came out just a few years ago.

I think it's a weird race to the bottom between devs being lazy in their optimization and the chipmakers providing the crutches to ensure people get 60 FPS out of their $2000 cards.
 

blckgrffn

Diamond Member
May 1, 2003
- My big worry with upscaling tech is that devs will use it as a get-out-of-jail-free card in terms of performance tuning and what-have-you. Why waste development hours being smart in your level design and programming and everything else when instead you can just release a game that runs like dog**** and tell everyone to use DLSS or FSR to get acceptable framerates?

If anything, a pretty common complaint nowadays is that games don't really look *that much better* for substantially worse performance than games that came out just a few years ago.

I think it's a weird race to the bottom between devs being lazy in their optimization and the chipmakers providing the crutches to ensure people get 60 FPS out of their $2000 cards.

But of course this is what will happen - to varying degrees.

It costs time (money/expertise) to optimize more. It's way more profitable to push high end settings to us and our hardware for most of these companies.

It will come down to the chops and the willingness of the developer to invest in optimization of their title, and this is likely to vary quite a bit.

This is just another tool in the toolkit in the push for HFR 4K w/RT.
 

uzzi38

Platinum Member
Oct 16, 2019

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
Deathloop confirmed to be the first title with FSR 2.0 available in 2 days time (May 12th).


Not long before we get some comparisons then. Reviewers may already even have access tbh.
Better IQ is always nice. But for me, the fact that you don't have to shell out for an RTX card to use it made it a winner from the get-go. Comparing it to DLSS has always been FUD for that reason. "Hey guyz! Let's compare a hardware agnostic solution to one that's proprietary and only works on RTX cards, that's legit right?" :p
 

Saylick

Diamond Member
Sep 10, 2012
According to Videocardz, the following games are first to get FSR 2.0:
DEATHLOOP
Asterigos, Delysium
EVE Online
Farming Simulator 22
Forspoken
Grounded
Microsoft Flight Simulator
NiShuiHan
Perfect World Remake
Swordsman Remake
Unknown 9: Awakening

Of course, more games will support FSR 2.0 moving forward. It's already baked into Microsoft's tools for Xbox game development, so that should help it gain traction when those games get PC versions.
 

blckgrffn

Diamond Member
May 1, 2003
Microsoft Flight Simulator is one of the biggest-deal titles on there. Doing that title in VR is a big load; it devours anything available now. On the MSFS forums people are absolutely purpose-building rigs for it.

I know they are doing a lot with that game/sim, working on a DX12 renderer, etc. More improvements to help the game scale better will ultimately be healthy for MSFS's ecosystem and for the people attempting to run it.
 

Makaveli

Diamond Member
Feb 8, 2002
Deathloop confirmed to be the first title with FSR 2.0 available in 2 days time (May 12th).


Not long before we get some comparisons then. Reviewers may already even have access tbh.

So does this mean we also get the driver with this in two days' time, or has AMD posted an official launch date for FSR 2.0?
 

Stuka87

Diamond Member
Dec 10, 2010
I had to chuckle at Farming Simulator having it. But since a lot of its market most likely doesn't have high-end machines, it makes sense.
 

Saylick

Diamond Member
Sep 10, 2012
Looks like TPU got their hands on an early preview??

AMD has achieved the unthinkable—the new FidelityFX Super Resolution FSR 2.0 looks amazing, just as good as DLSS 2.0, actually DLSS 2.3 (in Deathloop). Sometimes even slightly better, sometimes slightly worse, but overall this is a huge win for AMD. Take a look at our comparison images—there's a huge improvement when comparing FSR 1.0 to FSR 2.0. The comparison to "Native" or "Native+TAA" also always looks worse than FSR 2.0, which is somewhat expected. When comparing "DLSS Quality" against "FSR 2.0 Quality" it is possible to spot minor differences, but for every case that I found I'd say it's impossible to declare one output better than the other, it's pretty much just personal preference, or not even that.

https://www.techpowerup.com/review/amd-fidelity-fx-fsr-20/3.html
 

blckgrffn

Diamond Member
May 1, 2003
If this works like the review purports, it's a win for everyone. Developers can target one implementation, tons of nvidia cards are supported and obviously AMD gets a big feather in their cap for showing that they could pull it off and for having "feature parity" with DLSS, which so many outlets absolutely obsess over. Gamers can tick a box and most won't be able to tell the difference in IQ most of the time but they'll notice framerates up over 60FPS.

Also, guaranteed support on a huge breadth of titles because of the way it can help extend the console lifecycle and let more developers release "60 FPS/4K/RT" games.