
Question Speculation: RDNA2 + CDNA Architectures thread


Saylick

Golden Member
Sep 10, 2012
1,059
974
136
Pardon the interruption, but if the talk by Bill Dally, Nvidia's chief scientist, is anything to go by, I don't think Nvidia will move away from fixed-function units for ray tracing anytime soon, nor will AMD, for that matter.
He makes a compelling case for them, arguing that this type of unit is the way to keep improving performance per watt now that Dennard scaling is no more. I think AMD is aware of this and will introduce more of these units in RDNA3, as future nodes will allow them to dedicate more transistors to them and further improve on this metric.

Not that it matters, but I believe AMD did the right thing when they decided to go the Infinity Cache route, so as to keep data on-chip and avoid going out to memory as much. In the future they'll have to introduce specialised hardware judiciously if they intend to keep up their current trend.
I think this is a valid point, but I don't see a future scenario where we go back to GPUs with purely fixed-function units like we had in the early 2000s. I'm far from an expert on this, but more advanced RT algorithms or methods may be developed that make the RT unit more flexible or programmable in nature, especially if we reach a point where we can cast so many rays per frame that a denoising step isn't necessary (i.e. FPS becomes correlated mainly with the efficacy of the RT units themselves). But that would support the argument that we're going less fixed-function, not more. If this is true, then what has happened is that the RT unit has supplanted the programmable shader unit as the standard building block or execution unit of the GPU, which doesn't make the execution unit any more fixed-function.

EDIT: Rephrased for clarity.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
6,180
3,012
136
- A 50% performance hit on 300FPS is preferable to a 50% hit on 60 FPS.

Remasters should be the RTRT playground.

Cheap and easy way to make something old new again...
No, I meant it in absolute terms. If you apply the same kind of RT to an older title as you would to a modern game, it doesn't matter that it could normally run at 300 FPS (as opposed to, say, 120 FPS for a modern game); it's still absolutely limited by whatever takes the longest to finish. Since RT is handled by specialized hardware on these GPUs, you can't trade regular shader performance for faster RT in any general sense. What this means is that you're ultimately bound by how fast the RT work that needs that specialized hardware can be done, while the rest of the GPU goes underutilized. If you tax it hard enough, it doesn't even matter that you're only rendering a few simple objects; it will still limit the achievable frame rate, just as a CPU will bottleneck FPS at low enough resolutions. Even Quake II RTX, an ancient game that could run well above 300 FPS on a modern GPU, only gets 35 FPS at 4K on a 3090.

However, the benefit of all of this is that if you just leave it at the native 1080p (if that) resolution the game was originally made for, there's less of a performance compromise for the sake of adding RT, because the game will still likely run at over 60 FPS. Sure, you could add 4K textures and whatnot so the game could run at higher resolutions, but that starts to get expensive on top of adding all of the RT and making it work. Otherwise you go to all the trouble of reworking the game to modern graphical standards only to wind up in exactly the same predicament: you get faster frame rates at 4K without the ray tracing, but turning it on tanks the frame rate to what's now regarded as unacceptable levels.
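A back-of-the-envelope way to see why the old game's raster headroom barely helps: add a fixed per-frame RT cost to each game's frame time. This is a sketch with made-up numbers; the 25 ms RT cost is an assumption for illustration, not a measurement.

```python
# If ray tracing adds a fixed per-frame cost, the raster speed of an old game
# barely matters once that cost dominates the frame time.

def fps_with_rt(raster_fps: float, rt_ms: float) -> float:
    """FPS when a fixed RT stage of rt_ms is added to each frame."""
    frame_ms = 1000.0 / raster_fps + rt_ms
    return 1000.0 / frame_ms

# An old game at 300 FPS and a modern game at 120 FPS, both paying ~25 ms of RT:
old = fps_with_rt(300.0, 25.0)   # ~35 FPS
new = fps_with_rt(120.0, 25.0)   # ~30 FPS
print(f"old game: {old:.1f} FPS, modern game: {new:.1f} FPS")
```

With a fixed RT stage dominating the frame, a 2.5x advantage in raster throughput buys only a few FPS.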
 
  • Like
Reactions: Tlh97 and Saylick

Timorous

Senior member
Oct 27, 2008
670
764
136
Looks like Ray Tracing in WoW heavily favours AMD at the moment.


I think it is too early to say exactly who is better with RT. We need some neutral games with a neutral implementation to be released so we can compare properly because at the moment it seems as though devs can optimise for one implementation over the other quite easily and really skew the results.
 
  • Like
Reactions: Elfear

Shivansps

Diamond Member
Sep 11, 2013
3,365
978
136
I'm not sure who compared RT to PhysX, but they're right. I hadn't thought of it, but exactly the same thing is happening: Nvidia is spending die space on a specific feature that games overuse when it's really not needed. The exact same thing happened with PhysX, and we've all forgotten about it. Why? Because PhysX games today are like any other game: they use the features that are needed, in the right amount, where they make sense, and that runs very well on the CPU.

When I see RT in a game that makes everything shiny and reflective, to the point where some surfaces look like mirrors (even the AMD demo has this), that is overusing the RT tech, hence the bad performance. In time, RT will become more like PhysX: we'll all forget it's there.

What I don't think is going away is using AI to render at a lower resolution and scale the image up; in fact, that may be the future of game rendering, especially if it could be integrated into the engine and used to render only part of the image. I think the era of pure 100% native rendering is coming to an end.
 
  • Like
Reactions: prtskg and Mopetar

GodisanAtheist

Diamond Member
Nov 16, 2006
3,364
1,916
136
No, I meant it in absolute terms. If you apply the same kind of RT to an older title as you would to a modern game, it doesn't matter that it could normally run at 300 FPS (as opposed to, say, 120 FPS for a modern game); it's still absolutely limited by whatever takes the longest to finish. Since RT is handled by specialized hardware on these GPUs, you can't trade regular shader performance for faster RT in any general sense. What this means is that you're ultimately bound by how fast the RT work that needs that specialized hardware can be done, while the rest of the GPU goes underutilized. If you tax it hard enough, it doesn't even matter that you're only rendering a few simple objects; it will still limit the achievable frame rate, just as a CPU will bottleneck FPS at low enough resolutions. Even Quake II RTX, an ancient game that could run well above 300 FPS on a modern GPU, only gets 35 FPS at 4K on a 3090.

However, the benefit of all of this is that if you just leave it at the native 1080p (if that) resolution the game was originally made for, there's less of a performance compromise for the sake of adding RT, because the game will still likely run at over 60 FPS. Sure, you could add 4K textures and whatnot so the game could run at higher resolutions, but that starts to get expensive on top of adding all of the RT and making it work. Otherwise you go to all the trouble of reworking the game to modern graphical standards only to wind up in exactly the same predicament: you get faster frame rates at 4K without the ray tracing, but turning it on tanks the frame rate to what's now regarded as unacceptable levels.
-True, but the lower bound of that limit is still fairly high in terms of remaining playable, as Quake RTX has shown (the game uses only RTRT and no raster hardware). A hybrid scenario can still remain relatively playable at higher resolutions and settings.

Since old engines already have a functional lighting model, allowing players to turn on/off RTRT functionality a la carte would be nice. Things like Global Illumination/RT Ambient Occlusion/RT shadows could provide a major graphics overhaul for an acceptable performance hit.

Reflections might be a bit harder to implement due to the way some older games were designed to avoid or not consider reflective surfaces.

At the end of the day, I think people would take the FPS hit on some older games, either because they've already played them before or because disabling the RTRT features just returns the game to its original state.

Good way for devs to cut their teeth on implementation methods without turning their big AAA games into guinea pigs.

People were already making "Can it play Crysis" jokes about RT in the remaster, so people are primed for a big performance hit already. I think everyone wants to see what RTRT can really do all guns blazing, and remastered games would be a nice control for that.
 

Ajay

Diamond Member
Jan 8, 2001
9,485
3,951
136
Eh, wake me up in a couple of generations, when RT is mainstream with really good hardware and game support (or has become a second-rate feature due to the lack of both).
 

kurosaki

Senior member
Feb 7, 2019
257
247
86
I'm not sure who compared RT to PhysX, but they're right. I hadn't thought of it, but exactly the same thing is happening: Nvidia is spending die space on a specific feature that games overuse when it's really not needed. The exact same thing happened with PhysX, and we've all forgotten about it. Why? Because PhysX games today are like any other game: they use the features that are needed, in the right amount, where they make sense, and that runs very well on the CPU.

When I see RT in a game that makes everything shiny and reflective, to the point where some surfaces look like mirrors (even the AMD demo has this), that is overusing the RT tech, hence the bad performance. In time, RT will become more like PhysX: we'll all forget it's there.

What I don't think is going away is using AI to render at a lower resolution and scale the image up; in fact, that may be the future of game rendering, especially if it could be integrated into the engine and used to render only part of the image. I think the era of pure 100% native rendering is coming to an end.
Upscaling solutions like DLSS are a hack that eats huge amounts of die space, and they always will be. There is no future in upscaling; it's just an excuse to fill those tensor cores with something.
 
  • Like
Reactions: Tlh97 and KompuKare

DeathReborn

Platinum Member
Oct 11, 2005
2,368
281
126
I'm not sure who compared RT to PhysX, but they're right. I hadn't thought of it, but exactly the same thing is happening: Nvidia is spending die space on a specific feature that games overuse when it's really not needed. The exact same thing happened with PhysX, and we've all forgotten about it. Why? Because PhysX games today are like any other game: they use the features that are needed, in the right amount, where they make sense, and that runs very well on the CPU.

When I see RT in a game that makes everything shiny and reflective, to the point where some surfaces look like mirrors (even the AMD demo has this), that is overusing the RT tech, hence the bad performance. In time, RT will become more like PhysX: we'll all forget it's there.

What I don't think is going away is using AI to render at a lower resolution and scale the image up; in fact, that may be the future of game rendering, especially if it could be integrated into the engine and used to render only part of the image. I think the era of pure 100% native rendering is coming to an end.
There was dedicated PhysX hardware inside the GPU? Last time I checked it used the CUDA cores; on the G80 & co. it required 32+ cores and 256MB of VRAM.
 

Mopetar

Diamond Member
Jan 31, 2011
6,180
3,012
136
-True, but the lower bound of that limit is still fairly high in terms of remaining playable, as Quake RTX has shown (the game uses only RTRT and no raster hardware). A hybrid scenario can still remain relatively playable at higher resolutions and settings.
I don't think it shows that at all; rather the opposite. Quake II is over 20 years old, and it reduces the most powerful GPU available right now to a frame rate no competitive gamer would consider remotely acceptable. Even if it used a hybrid approach, it would still ultimately be bound by the slowest processing, which is the ray tracing. You only improve performance to the extent that you can remove some amount of ray tracing; if you kept the same amount of RT while employing rasterization in some fashion, you'd get the same performance. That's why all of the modern titles with a hybrid approach still show the performance drop unless the RT effects are limited. Use a lot of RT and performance drops, regardless of whether it's an old game or a newer title.

The only time you start to trade off performance is when it's so high that it's pointless. If you're running Quake II RTX at 1080p, a 3090 will get over 100 FPS, and that's approaching the limits of most displays. Without RT the game would probably run at something absurd and be bottlenecked by the CPU, but it would exceed the refresh rate of any display well before that. So you can use RT effects reasonably well if you're willing to run at 1080p; however, you have to use a high-end 4K GPU to do so, and that's outside the price range of most people who normally game at 1080p, because their cards either have no RT hardware (16xx series) or so little of it that they hit the same problem as the high-end cards trying to run RT at the resolutions their raster performance targets.

High resolution, ray tracing effects, and acceptable frame rates. Pick two.
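One way to see the "pick two" trade-off: when rays are cast per pixel, RT cost scales roughly with resolution, so an RT budget that fits at 1080p blows past a 30 FPS frame budget at 4K. A rough sketch (the 8 ms figure is an assumption for illustration, not a benchmark):

```python
# If the RT stage scales with pixel count (rays per pixel held fixed),
# moving from 1080p to 4K quadruples the RT time.

RES = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4k": 3840 * 2160}

def rt_ms_at(res: str, rt_ms_1080p: float) -> float:
    """Scale an assumed 1080p RT cost to another resolution by pixel count."""
    return rt_ms_1080p * RES[res] / RES["1080p"]

rt_1080 = 8.0                      # assume 8 ms of RT work per frame at 1080p
print(rt_ms_at("4k", rt_1080))     # 32 ms at 4K: the RT stage alone eats a 30 FPS budget
```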
 
  • Like
Reactions: Ajay

Shivansps

Diamond Member
Sep 11, 2013
3,365
978
136
Upscaling solutions like DLSS are a hack that eats huge amounts of die space, and they always will be. There is no future in upscaling; it's just an excuse to fill those tensor cores with something.
It's not a hack; it's creating an image at a higher resolution than it was rendered at, and while it will never match the quality of native-resolution rendering, the result tends to be better than rendering at an intermediate resolution. E.g., 720p upscaled to 1440p generally looks better than 1080p native.

You're only saying that because you have no idea how hard it is to render everything at 4K native, and you're not thinking forward from there. Rendering things at a lower resolution (or lower quality) is a valid optimisation technique that has been used for as long as we've had 3D environments, with the results masked by motion blur or other tricks, and nobody calls those "hacks". If game engines could start doing this for individual parts of the scene instead of the whole screen, it would be huge for the game industry.

At any rate, a lot of games already allow a different rendering resolution to be used, and DLSS is a huge improvement over that, if only it worked on every GPU. In fact, every game should support separate screen and rendering resolutions.
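The pixel-count arithmetic behind the 720p-to-1440p claim can be sketched as follows (shaded-pixel counts only; the real DLSS cost would also include running the upscaling network itself):

```python
# How many pixels actually get shaded under each option, relative to
# native 1440p output.

def pixels(w: int, h: int) -> int:
    return w * h

native_1440 = pixels(2560, 1440)
render_720 = pixels(1280, 720)     # upscaler input
native_1080 = pixels(1920, 1080)   # the "middle resolution" alternative

print(f"720p shades {render_720 / native_1440:.0%} of native 1440p pixels")
print(f"1080p shades {native_1080 / native_1440:.0%} of native 1440p pixels")
```

So upscaling from 720p shades only a quarter of the pixels of native 1440p, roughly half the shading work of the 1080p-native alternative it is being compared against.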

There was dedicated PhysX hardware inside the GPU? Last time I checked it used the CUDA cores; on the G80 & co. it required 32+ cores and 256MB of VRAM.
PhysX was originally a dedicated hardware solution; then Nvidia bought it and ported it to CUDA. That in turn increased the need for more CUDA cores, since they were now also being used for non-rendering work in games.
 
Last edited:
  • Like
Reactions: Mopetar

kurosaki

Senior member
Feb 7, 2019
257
247
86
It's not a hack; it's creating an image at a higher resolution than it was rendered at, and while it will never match the quality of native-resolution rendering, the result tends to be better than rendering at an intermediate resolution. E.g., 720p upscaled to 1440p generally looks better than 1080p native.

You're only saying that because you have no idea how hard it is to render everything at 4K native, and you're not thinking forward from there. Rendering things at a lower resolution (or lower quality) is a valid optimisation technique that has been used for as long as we've had 3D environments, with the results masked by motion blur or other tricks, and nobody calls those "hacks". If game engines could start doing this for individual parts of the scene instead of the whole screen, it would be huge for the game industry.

At any rate, a lot of games already allow a different rendering resolution to be used, and DLSS is a huge improvement over that, if only it worked on every GPU. In fact, every game should support separate screen and rendering resolutions.



PhysX was originally a dedicated hardware solution; then Nvidia bought it and ported it to CUDA. That in turn increased the need for more CUDA cores, since they were now also being used for non-rendering work in games.
DLSS is a tensor-infused hack and in no sense the way forward. Going down in IQ can never be the goal. But you were skimming the surface of something launching with DX 12.1: VRS, where portions of the screen get shaded far less. You can pinpoint places on screen that don't need per-pixel shading and run the same shading once for a block of four pixels instead of each one individually.
Upscaling is dead from the start, though. I hope Nvidia quits this nonsense sooner rather than later.
 
  • Like
Reactions: Tlh97 and KompuKare

Shivansps

Diamond Member
Sep 11, 2013
3,365
978
136
DLSS is a tensor-infused hack and in no sense the way forward. Going down in IQ can never be the goal. But you were skimming the surface of something launching with DX 12.1: VRS, where portions of the screen get shaded far less. You can pinpoint places on screen that don't need per-pixel shading and run the same shading once for a block of four pixels instead of each one individually.
Upscaling is dead from the start, though. I hope Nvidia quits this nonsense sooner rather than later.
Fine, go ahead and tell every company that ever released a game for PS4/Xbox One to stop using DRS, and tell Unity, Frostbite, Unreal Engine, etc. to remove dynamic-resolution support on PC... because they are all wrong.

This may come as a surprise to you, but most people do not run super-high-end GPUs capable of rendering any game at any resolution at the FPS they want, and they can't replace those cards every generation.
 

Mopetar

Diamond Member
Jan 31, 2011
6,180
3,012
136
I don't think DLSS is bad in and of itself, but the reliance on specialized hardware that isn't particularly useful for much else in gaming workloads as well as being a proprietary implementation should give people some pause.

I also don't think it's fair to dislike it simply because of how it's being used to compensate for lackluster RT performance crippling games at higher resolutions. If it were being sold as a way to still be able to run games at 4K in four years, when a card is showing its age and the alternative is running at a lower resolution that the monitor has to upscale, would you dislike it? At that point it's a way to get some extra life out of a card you probably spent a good deal of money on when you bought it.

Blaming DLSS for how it's being misused is like trying to fault a pencil for a poor exam grade.
 

dr1337

Member
May 25, 2020
160
253
96
would you dislike it?
As it stands, it completely depends on the implementation. Some games pull DLSS off, while in others it is objectively worse than just rendering at native resolution and turning down quality settings. I agree with kurosaki 100% that going down in IQ is nonsense and problematic. DLSS in its original incarnation was completely a hack to get better ray-tracing frame rates out of early Turing cards. Nvidia only sells gaming GPUs with tensor cores so that it can reuse dies for the professional lines and save cost; in a sense, DLSS only exists because Nvidia cut corners. Also, while the hypothetical is nice, the current fact is that DLSS can't be sold as a crutch for old cards and new games, because it requires a ton of overhead and vendor-specific lock-in. It's distinctly different from most upscaling tech out there because it was created from tensor cores being a solution looking for a problem. Combine this with hardware and software dependencies that aren't easy to work around, and IMO one could very much say that DLSS is a hack.
 

kurosaki

Senior member
Feb 7, 2019
257
247
86
Fine, go ahead and tell every company that ever released a game for PS4/Xbox One to stop using DRS, and tell Unity, Frostbite, Unreal Engine, etc. to remove dynamic-resolution support on PC... because they are all wrong.

This may come as a surprise to you, but most people do not run super-high-end GPUs capable of rendering any game at any resolution at the FPS they want, and they can't replace those cards every generation.
Well, it doesn't get better by wasting precious die space on unwanted logic. What if they had used that die area for more RT cores instead? Then, just maybe, the cards would have rendered well-lit AND nice-looking images...
 

Shivansps

Diamond Member
Sep 11, 2013
3,365
978
136
I don't think DLSS is bad in and of itself, but the reliance on specialized hardware that isn't particularly useful for much else in gaming workloads as well as being a proprietary implementation should give people some pause.

I also don't think it's fair to dislike it simply because of how it's being used to compensate for lackluster RT performance crippling games at higher resolutions. If it were being sold as a way to still be able to run games at 4K in four years, when a card is showing its age and the alternative is running at a lower resolution that the monitor has to upscale, would you dislike it? At that point it's a way to get some extra life out of a card you probably spent a good deal of money on when you bought it.

Blaming DLSS for how it's being misused is like trying to fault a pencil for a poor exam grade.
Who says that DLSS can't be implemented under DirectML without needing specialised hardware? Anyway, AI accelerators should be used for more than just DLSS in the future.

Well, it doesn't get better by wasting precious die space on unwanted logic. What if they had used that die area for more RT cores instead? Then, just maybe, the cards would have rendered well-lit AND nice-looking images...
First, I don't see anything wrong with having specialised hardware for AI acceleration; looking forward, AI will be used in a lot more programs.

Second, a DLSS alternative can be implemented under DirectML.
 

Mopetar

Diamond Member
Jan 31, 2011
6,180
3,012
136
As it stands it completely depends on implementation. Some games pull DLSS off while others it objectively is worse than just rendering at native and turning down quality settings. I do believe with kurosaki 100% that going down in IQ is nonsense and problematic. DLSS in its original incarnation was completely a hack to get better frame rates out of raytracing from early Turing cards. Nvidia only sells gaming GPUs with tensor cores specifically so they can reuse dies for the professional lines and save cost. DLSS itself specifically only exists because of nvidia cutting corners in a sense. Also while the hypothetical is nice, the current fact is that DLSS can't be sold as a crutch for old cards & new games because it requires a ton of overhead and vendor specific lock ins. Its distinctly different from most upscaling tech out there because it was created from tensor cores being a solution looking for a problem. Combine this with both hardware and software dependencies that aren't easy to work around, and IMO one very much could say that DLSS is a hack.
This entire argument is a disagreement over how it's being used, and I don't think that should be a criticism of DLSS itself. My entire point is: would you have any specific reason to dislike DLSS if it weren't being used that way, and were instead used in a manner that's intentionally hard to dislike on the face of things? Being a proprietary implementation (similar to G-Sync) as opposed to an open standard is about it, as far as I'm concerned.

Who says that DLSS can't be implemented under DirectML without needing specialised hardware? Anyway, AI accelerators should be used for more than just DLSS in the future.
I suppose it's necessary to distinguish between DLSS as a general idea and DLSS the Nvidia specific implementation.

You probably can implement it without specialized hardware, just as you could implement ray tracing without any special hardware, or even generate computer graphics without an actual GPU. It does raise the question of how much performance suffers without hardware that can perform those computations efficiently.

The crux of the matter is whether or not the hardware that is needed to make such a solution efficient enough to use can also be used for other purposes or whether it's so fixed function that it has no other applications. If it's the latter there's a further question as to whether it's worth the die space.
 

soresu

Golden Member
Dec 19, 2014
1,687
871
136
Eeenteresting.

Apparently someone has built a Rust-based compatibility layer that translates CUDA to Intel's 'Level Zero' interface.

Here's hoping that they can extend it to AMD HIP too, so that we can get Nvidia OptiX running on AMD GPUs and finally free commercial RT renderers from Nvidia's dominion.

Link here.
 

JoeRambo

Golden Member
Jun 13, 2013
1,246
1,183
136
Here's hoping that they can extend it to AMD HIP too, so that we can get Nvidia OptiX running on AMD GPUs and finally free commercial RT renderers from Nvidia's dominion.
Only for cards that are actually supported by HIP. Why would anyone bother with HIP if AMD doesn't? Recent cards like Navi 10 are still not supported.
People use CUDA for a simple reason: it just works. On AMD consumer cards you either suffer through OpenCL or buy an Nvidia card that actually works and is supported from day one.
 
  • Haha
Reactions: Krteq

soresu

Golden Member
Dec 19, 2014
1,687
871
136
Only for cards that are actually supported by HIP. Why would anyone bother with HIP if AMD doesn't? Recent cards like Navi 10 are still not supported.
People use CUDA for a simple reason: it just works. On AMD consumer cards you either suffer through OpenCL or buy an Nvidia card that actually works and is supported from day one.
It seems like that is changing as of the RX 6000 series, going by this article on Phoronix.
 
  • Like
Reactions: Tlh97 and moinmoin

moinmoin

Platinum Member
Jun 1, 2017
2,772
3,672
136
It's not a hack; it's creating an image at a higher resolution than it was rendered at, and while it will never match the quality of native-resolution rendering, the result tends to be better than rendering at an intermediate resolution. E.g., 720p upscaled to 1440p generally looks better than 1080p native.

You're only saying that because you have no idea how hard it is to render everything at 4K native, and you're not thinking forward from there. Rendering things at a lower resolution (or lower quality) is a valid optimisation technique that has been used for as long as we've had 3D environments, with the results masked by motion blur or other tricks, and nobody calls those "hacks". If game engines could start doing this for individual parts of the scene instead of the whole screen, it would be huge for the game industry.

At any rate, a lot of games already allow a different rendering resolution to be used, and DLSS is a huge improvement over that, if only it worked on every GPU. In fact, every game should support separate screen and rendering resolutions.
Just curious, since you are quite vocal about AMD putting too few CUs in its APUs: would you prefer AMD put logic into its APUs that accelerates something DLSS-like, instead of increasing the CU count?
 
