
Discussion: AMD Gaming Super Resolution (GSR)


DisEnchantment

Golden Member
A new patent came up today for AMD's FSR




20210150669
GAMING SUPER RESOLUTION

Abstract
A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generate non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The processor is also configured to convert the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and provide the output image for display.


[0008] Conventional super-resolution techniques include a variety of conventional neural network architectures which perform super-resolution by upscaling images using linear functions. These linear functions do not, however, utilize the advantages of other types of information (e.g., non-linear information), which typically results in blurry and/or corrupted images. In addition, conventional neural network architectures are generalizable and trained to operate without significant knowledge of an immediate problem. Other conventional super-resolution techniques use deep learning approaches. The deep learning techniques do not, however, incorporate important aspects of the original image, resulting in lost color and lost detail information.

[0009] The present application provides devices and methods for efficiently super-resolving an image, which preserves the original information of the image while upscaling the image and improving fidelity. The devices and methods utilize linear and non-linear up-sampling in a wholly learned environment.

[0010] The devices and methods include a gaming super resolution (GSR) network architecture which efficiently super resolves images in a convolutional and generalizable manner. The GSR architecture employs image condensation and a combination of linear and nonlinear operations to accelerate the process to gaming viable levels. GSR renders images at a low quality scale to create high quality image approximations and achieve high framerates. High quality reference images are approximated by applying a specific configuration of convolutional layers and activation functions to a low quality reference image. The GSR network approximates more generalized problems more accurately and efficiently than conventional super resolution techniques by training the weights of the convolutional layers with a corpus of images.
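To illustrate what [0010]'s "combination of linear and nonlinear operations" means in practice, here is a toy NumPy sketch. The kernels, layer sizes, and the simple addition used to combine the two branches are my own illustrative guesses, not anything specified by the patent:

```python
import numpy as np

def linear_layer(x, w):
    """A purely linear 1x1 'convolution': a weighted sum of input channels."""
    return np.einsum('oc,chw->ohw', w, x)

def nonlinear_layer(x, w):
    """The same linear map followed by a ReLU activation;
    the activation function is what makes this branch non-linear."""
    return np.maximum(linear_layer(x, w), 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4, 4))   # a 3-channel 4x4 "image"
w = rng.standard_normal((8, 3))      # 3 channels in, 8 feature maps out

lin = linear_layer(x, w)       # linear branch output
non = nonlinear_layer(x, w)    # non-linear branch output
combined = lin + non           # combine the two branches (see claim [0012])
print(combined.shape)          # (8, 4, 4)
```

The point is only that the two branches see the same input and their outputs are merged before the final upscale; the patent's actual layer configuration is not disclosed at this level of detail.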

[0011] A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generate non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The processor is also configured to convert the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and provide the output image for display.

[0012] A processing device is provided which includes memory and a processor configured to receive an input image having a first resolution. The processor is also configured to generate a plurality of non-linear down-sampled versions of the input image via a non-linear upscaling network and generate one or more linear down-sampled versions of the input image via a linear upscaling network. The processor is also configured to combine the non-linear down-sampled versions and the one or more linear down-sampled versions to provide a plurality of combined down-sampled versions. The processor is also configured to convert the combined down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution by assigning, to each of a plurality of pixel blocks of the output image, a co-located pixel in each of the combined down-sampled versions and provide the output image for display.

[0013] A super resolution processing method is provided which improves processing performance. The method includes receiving an input image having a first resolution, generating linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generating non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The method also includes converting the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and providing the output image for display.
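The "assign, to each pixel block of the output image, a co-located pixel from each of the combined down-sampled versions" step in claim [0012] is essentially a depth-to-space (pixel-shuffle) rearrangement. A minimal NumPy sketch of that operation (my own implementation for illustration, not code from the patent):

```python
import numpy as np

def depth_to_space(feature_maps, scale):
    """Rearrange (C*r*r, H, W) feature maps into a (C, H*r, W*r) image.
    Each r-by-r output pixel block takes one co-located pixel from each
    of the r*r down-sampled versions."""
    c_r2, h, w = feature_maps.shape
    r = scale
    c = c_r2 // (r * r)
    x = feature_maps.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)          # (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# four 2x2 "down-sampled versions" -> one 4x4 output channel
maps = np.arange(16).reshape(4, 2, 2)
out = depth_to_space(maps, 2)
print(out.shape)  # (1, 4, 4)
```

With this toy input, the top-left 2x2 block of the output is [[0, 4], [8, 12]]: pixel (0, 0) from each of the four down-sampled versions, exactly the co-location the claim describes.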

It uses inferencing for upscaling. As with all ML models, how you assemble the layers, what kind of parameters you choose, which activation functions you choose, etc. matters a lot, and the difference can be night and day in accuracy, performance, and memory.

 
Yes... and requires a bigger more expensive chip.

- And it ties you to the NV ecosystem. Nintendo is a pretty big dog all their own, last thing they'll want to do is hem themselves in with a single provider due to a proprietary software feature.

If anything, I could see Nintendo insisting NV port DLSS over to run on general compute shaders.
 
Yes... and requires a bigger more expensive chip.
There is a sizable performance hit from doing FSR 2, and it's larger on lower-end chips because it's using the shader cores. Tensor cores do not take a lot of die space, and it's more efficient to use them. Dedicated hardware is more efficient than general-purpose hardware.

- And it ties you to the NV ecosystem. Nintendo is a pretty big dog all their own, last thing they'll want to do is hem themselves in with a single provider due to a proprietary software feature.

If anything, I could see Nintendo insisting NV port DLSS over to run on general compute shaders.
This doesn't make any sense. It's a console; you are tied into an ecosystem anyway. If it's an NV chip you are tied into using NV anyway, and all consoles are coded to the metal, which is very proprietary.
 
- And it ties you to the NV ecosystem. Nintendo is a pretty big dog all their own, last thing they'll want to do is hem themselves in with a single provider due to a proprietary software feature.
Nintendo is going to keep using Nvidia.

Before Nvidia, Nintendo's tool chain had honestly been a total mess. Nvidia's main selling point is its software, ecosystem, and tool chain.

Nintendo is not a company that systematically builds up systems and frameworks over time. Instead they like to build fresh stuff as projects and drop it again one or more generations on. Nvidia offers a solid base for Nintendo and third-party developers to build upon, and I can't see them wanting to move away from it.
 
Yes, they'll strive to show all the places FSR 2.0 fails and this is GOOD.
Thing is, if you look for an evaluation that was not done by Nvidia's marketing department, you will see they point out places where the latest version of DLSS also falls on its face.

Hardware Unboxed did a good one like this.


Focusing on the problems of one product while ignoring all the problems of the other product is not:
"The best investigation thus far."
"GOOD"

It is marketing.
 
That's really cool and encouraging that such mods are possible! There are plenty of old games that could benefit but no longer see any official support from their developers. Though the number of games supporting DLSS through a dynamic library, as used for this hack, is obviously smaller.
 
That's really cool and encouraging that such mods are possible! There are plenty of old games that could benefit but no longer see any official support from their developers. Though the number of games supporting DLSS through a dynamic library, as used for this hack, is obviously smaller.
Every game that supports DLSS in general is potentially open to a similar solution. So I mean, it's not a massive list of games, but it's quite a few times the number of FSR 2.0 native games lol
 
FSR2 implementation in DLSS2 titles in a matter of hours?
Yeah, as long as DLSS is implemented using dynamic libraries (DLL files), those can be hooked to run FSR 2.0 instead. That's essentially what we were talking about before.
 
This came out a while ago, but RDNA3 is meant to get an AMD tensor core equivalent: https://www.guru3d.com/news_story/tensor_core_equivalent_likely_to_get_embedded_in_amd_rdna3.html

This is no surprise to anyone, as the only way they could compete on an equal footing with DLSS is with the same custom hardware.
That's better placed in the RDNA3 thread since FSR 2.0 (which this thread is about) already showed that GPUs don't need tensor cores at all to compete on an equal footing with DLSS. But tensor cores can be useful for other stuff.
 
AMD's MO has always been about doing more with less (or in many cases generalized hardware). It's the Alton Brown GPU design philosophy: never have a single-purpose compute unit in your GPU; everything should have at least two functions at any given time.

I think it's just the product of being the underdog and market follower. Ten years of having to be outrageously cost-conscious against their competition everywhere will do that to a company.

That's a lot of words to say I'd bet dollars to donuts that AMD is not putting in any fixed function tensor hardware into their GPUs.
 