TPU: many combinations of GPU/resolution/RTX/DLSS not allowed

BFG10K

Lifer
Aug 14, 2000
22,672
2,817
126
https://www.techpowerup.com/252550/nvidia-dlss-and-its-surprising-resolution-limitations

Wow, this just gets better and better. So now even with game support, you basically roll the dice to see if your GPU/RTX/resolution combination allows DLSS.

BF5 2560x1440 works on 2060/2070/2080 but not on 2080Ti. Given the completely nonsensical yes/no combinations of both games, it's obvious some serious puppet strings are being pulled in the background by nVidia in a desperate attempt to mask how garbage these "features" are.

Also what guarantee do we even have that these games' features will work on future hardware?
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
No, it'll be some kind of technical limit for the most part. Patterns?

Putting DLSS in with the raytracing is a slightly special case.

They haven't enabled it below 4k without the ray tracing - it is possible that it just doesn't work as well at lower resolutions. The only one which surprises me is not having 4k/2060 because that's surely one of the main logical targets?
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
I'm guessing right now they have to compute it for a particular resolution, and number of tensor cores, and Nvidia have only done a few combinations. Hopefully this will improve over time.
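To make that guess concrete, here's a toy sketch of what a per-title whitelist of supported combinations might look like; the names and table entries below are invented for illustration, not NVIDIA's actual data.

```cpp
// Toy sketch of a per-title DLSS whitelist keyed on (GPU, resolution).
// Entries are invented for illustration only.
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>

int main() {
    using Combo = std::pair<std::string, std::string>;  // {GPU, resolution}
    std::map<std::string, std::set<Combo>> dlssWhitelist = {
        {"BF5", {{"RTX 2060", "2560x1440"},
                 {"RTX 2070", "2560x1440"},
                 {"RTX 2080", "2560x1440"},
                 {"RTX 2080 Ti", "3840x2160"}}},
    };

    Combo query{"RTX 2080 Ti", "2560x1440"};
    bool allowed = dlssWhitelist["BF5"].count(query) > 0;
    std::cout << "DLSS allowed: " << (allowed ? "yes" : "no") << "\n";  // prints "no"
    return 0;
}
```

If it really is gated that way, every new GPU/resolution pair needs its own pre-trained profile before the driver exposes the option, which would explain the sparse table.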
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
I think, by its nature, it'll always be generated one resolution at a time. Possibly the output might be specific enough to need certain other major features on/off as well.

Hard to think why generating the neural net would involve the number of tensor cores it has to ultimately run on. It is possible that the 2060 just doesn't have enough tensor core speed to do it fast enough or something.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
I think, by its nature, it'll always be generated one resolution at a time. Possibly the output might be specific enough to need certain other major features on/off as well.

Hard to think why generating the neural net would involve the number of tensor cores it has to ultimately run on. It is possible that the 2060 just doesn't have enough tensor core speed to do it fast enough or something.
Well, if you have fewer cores and are trying to do a very high res, then the generated algorithm would have to be a bit simpler so it runs a bit faster, whereas if you have more cores then it can do more work. Hence I suspect part of the input into the supercomputer generating the algorithm will be the end GPU's tensor processing power, or at least some range of processing power.
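A rough back-of-envelope version of that argument, with every number below assumed purely for illustration:

```cpp
// Back-of-envelope sketch: the per-pixel work a DLSS-style network can afford
// depends on the card's tensor throughput and on how much of the frame you are
// willing to spend on it. All figures are assumptions, not measured values.
#include <cstdio>

int main() {
    const double tensor_tflops  = 52.0;              // assumed tensor throughput
    const double dlss_budget_ms = 1.5;               // assumed slice of the frame for DLSS
    const double output_pixels  = 3840.0 * 2160.0;   // 4k output

    const double flops_in_budget = tensor_tflops * 1e12 * (dlss_budget_ms / 1000.0);
    const double flops_per_pixel = flops_in_budget / output_pixels;

    std::printf("~%.0f FLOPs available per output pixel\n", flops_per_pixel);
    // Fewer tensor cores or a higher output resolution shrinks this number,
    // so the network trained for that combination has to be simpler.
    return 0;
}
```

So the profile NVIDIA generates would plausibly be tied to a band of tensor throughput, as suspected above.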
 

BFG10K

Lifer
Aug 14, 2000
22,672
2,817
126
No, it'll be some kind of technical limit for the most part. Patterns?
What "technical limit" allows the 2060/2070/2080 to work but not the 2080TI, as in the case of BF5 @ 1440p?

Putting DLSS in with the raytracing is a slightly special case.
But the forum was telling us RTX would become much more viable when combined with DLSS. Now we're being told such a combination is a "special case"?

The only one which surprises me is not having 4k/2060 because that's surely one of the main logical targets?
Really? You're not at all surprised at the BF5 example above?

Many low end cards can't play high resolutions and/or high settings in newer games at a good framerate. Should these features also be disabled when nVidia decides to?

It's painfully obvious what's really going on here:
  1. Either DLSS is more of a performance hit than we're led to believe, and nVidia is masking it by cherry-picking the allowable use cases.
  2. Or certain combinations of GPU/RTX/DLSS/resolution don't show enough of a difference between the cards to justify their exorbitant cost differences, so nVidia simply disables the "bad" combos so they can't be benchmarked.
So even if you have an overpriced Turding and the game supports DLSS/RTX, you still have to have nVidia's "blessing" before they allow it, with absolutely no ability to discern what will happen in future games and future hardware. The next DLSS/RTX game may not even allow it at your resolution, and you've paid $1200 for the "privilege" of finding this out.

Where does it say on the GPU box "DLSS/RTX will be arbitrarily restricted at nVidia's discretion"?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
This doesn't surprise me at all, really. This technology is actually a bit more limited than most people realize. I'm glad that TPU is bringing this issue to light; perhaps more people will see the RTX line of cards for the "garbage" (for lack of a better word) that they are. (Maybe "technological mish-mash" would be a better term?)

Anyways, it's kind of curious how NV, who basically stripped out compute and unnecessary functionality to make a "pure gaming" card in Maxwell, would switch to an "everything, including the kitchen sink (AI/DL)" approach for a consumer gaming card in the current time-frame.
 
  • Like
Reactions: Arkaign

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
DLSS + ray tracing being off for very fast cards at lower resolutions is clearly happening where NV think everything is already running fast enough without it. That, or potentially DLSS just doesn't work all that well at very high frame rates.

Using DLSS at 4k on a 2060 is something a lot of people would actually want to do, so it's more interesting to ask why that one is off.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
I'm guessing right now they have to compute it for a particular resolution, and number of tensor cores, and Nvidia have only done a few combinations. Hopefully this will improve over time.
That is my understanding.

This is entirely expected for now, and it shouldn't really be news.

DLSS is laboriously pre-calculated.

NVIDIA’s DLSS system runs super-sampling on one specific game, over and over again, on the graphics cards in its massive data centers. It computes the best ways to apply the super-sampling technique to a game with repetitive processing on that game’s visuals—the polygons and textures that make up what you see on your screen. The “deep learning” part of the process comes into play here; the system learns as much as it possibly can about the way that the game looks, and how to make it look better.

...

Here’s the rub: the deep learning part of DLSS requires months of processing in NVIDIA’s data centers before it can be applied to PC games. So for every new game that comes out, NVIDIA needs to run its gigantic GPU arrays for a long time in order to get DLSS ready.


Once the heavy lifting is done, NVIDIA will update its GPU drivers and enable DLSS on the new games, at which point the developer can enable it by default or allow it as a graphics option in the settings menu. Because the deep learning system has to look at the geometry and textures of each game individually to improve the performance of that specific game, there’s no way around this “one game at a time” approach. It will get faster as NVIDIA improves it—possibly shaving the time down to weeks or days for one game—but at the moment it takes a while.

https://www.howtogeek.com/401624/what-is-nvidia-dlss-and-how-will-it-make-ray-tracing-faster/
 

pauldun170

Diamond Member
Sep 26, 2011
9,133
5,072
136
https://www.techpowerup.com/252550/nvidia-dlss-and-its-surprising-resolution-limitations

Wow, this just gets better and better. So now even with game support, you basically roll the dice to see if your GPU/RTX/resolution combination allows DLSS.

BF5 2560x1440 works on 2060/2070/2080 but not on 2080Ti. Given the completely nonsensical yes/no combinations of both games, it's obvious some serious puppet strings are being pulled in the background by nVidia in a desperate attempt to mask how garbage these "features" are.

Also what guarantee do we even have that these games' features will work on future hardware?


From your link

A lot of words to basically say: if the frame rate is too high, the tensor cores can't keep up.

This explains why high end cards can't use DLSS at lower resolutions.

Lower-end cards on the other hand have so few tensor cores that they can't keep up with the high resolution to begin with. This doesn't explain why the 2060 can use DLSS at 4k in Battlefield, though.
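To put some hedged numbers on the explanation above, assume DLSS costs a roughly fixed amount of time per frame (the 2 ms figure below is an assumption, not a measurement):

```cpp
// Sketch: at high frame rates the per-frame time budget shrinks, so a roughly
// fixed DLSS inference cost eats a growing share of each frame.
#include <cstdio>

int main() {
    const double dlss_cost_ms = 2.0;            // assumed fixed cost per frame
    const int fps_targets[] = {60, 100, 144, 200};

    for (int fps : fps_targets) {
        const double frame_ms = 1000.0 / fps;
        const double share = 100.0 * dlss_cost_ms / frame_ms;
        std::printf("%3d fps: %.2f ms/frame, DLSS would take ~%.0f%% of it\n",
                    fps, frame_ms, share);
    }
    // At 200 fps the fixed cost is ~40% of the frame, so enabling DLSS there
    // could lower performance instead of raising it.
    return 0;
}
```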

The blur that DLSS adds in BF5 sucks. I'll probably just leave it off for now. I was fine with the frame rates before with ray tracing on ultra.
 
  • Like
Reactions: happy medium

n0x1ous

Platinum Member
Sep 9, 2010
2,572
248
106
DLSS is indeed a blur-fest, though so is TAA. The new update improves RTX performance again, though, and I am seeing between 50-70 fps at 4k ultra, RTX ultra, DLSS off, which looks amazing. Very playable for single player, but I will turn RTX off for multiplayer for more frames.
 
  • Like
Reactions: ZGR

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
What "technical limit" allows the 2060/2070/2080 to work but not the 2080TI, as in the case of BF5 @ 1440p?


But the forum was telling us RTX would become much more viable when combined with DLSS. Now we're being told such a combination is a "special case"?


Really? You're not at all surprised at the BF5 example above?

Many low end cards can't play high resolutions and/or high settings in newer games at a good framerate. Should these features also be disabled when nVidia decides to?

It's painfully obvious what's really going on here:
  1. Either DLSS is more of a performance hit than we're led to believe, and nVidia is masking it by cherry-picking the allowable use cases.
  2. Or certain combinations of GPU/RTX/DLSS/resolution don't show enough of a difference between the cards to justify their exorbitant cost differences, so nVidia simply disables the "bad" combos so they can't be benchmarked.
So even if you have an overpriced Turding and the game supports DLSS/RTX, you still have to have nVidia's "blessing" before they allow it, with absolutely no ability to discern what will happen in future games and future hardware. The next DLSS/RTX game may not even allow it at your resolution, and you've paid $1200 for the "privilege" of finding this out.

Where does it say on the GPU box "DLSS/RTX will be arbitrarily restricted at nVidia's discretion"?

While I basically agree with the gist of this perspective, I think it would be better received without the 'Turding' and the more aggressive ways of telling people how negatively you view this lineup and its features. Now, I'm a relative newcomer, having only joined in 2006, and I'm far from a mod or even in a position to tell you how to go about things; just sharing my perspective.

Indeed, I do think DLSS is particularly disappointing. Envision an alternative design that spent 0% of the transistor budget on Tensor (seemingly about a quarter of the die, if our rough estimates hold water): at the same price point or even less (less R&D spent on Tensor altogether?), it would have been possible to make a GPU lineup with up to 33% more performance, using a standard design plus the RT cores. Or, skipping RT and Tensor both, something roughly double the performance, or approaching it, assuming they could feed such a monster enough bandwidth.

Following this logic further, a standard design could have led to dramatic performance increases and reasonable prices with a more balanced approach in die sizes. The RTX 2080ti would be half the die size or so with no Tensor/RT, and so on down the lineup. If they cut the sizes 25% in combination with cutting RT and Tensor, the 12nm lineup could have been something like this:

GT 1150 4GB ~= 1050ti for $89

GTX 1150ti 4GB ~= GTX1060 3-6GB for about $169

GTX 1160 6GB ~= GTX1070ti for $249

GTX 1170 8GB ~= GTX1080OC for $349

GTX 1180 12GB ~= GTX1080ti+ for $459

GTX 1180ti 16GB ~= 1080ti+45% for $699

Then save the RTX features for pro cards for a few generations to get the tech truly refined, while charging stunning ASPs to the VCs and various IPO innovators that are sinking money into DL/AI, etc. Once the time looked right to start merging it into GTX and the consumer level, boom, bring it down then.

I don't feel Nvidia is being greedy or evil here, but looking at the sales, performance, pricing, and effective reality of RTX/Tensor for the average PC gamer, it looks just incredibly underwhelming and overpriced. We just saw a 2+ year window of the 10xx series finding homes with many gamers, and there is nearly zero reason for any of them to buy RTX unless they can afford the 2080ti, as every other performance level was already met by the products they bought, at prices that are basically the same. IOW, say a 1080ti buyer @ $699 wants to buy another card for $700: nothing available makes that make sense. It's the same at basically every level; there is nothing on offer at what they paid that gives any reason to upgrade this time.

Worse, with Vega 7 hitting like a damp squib, there is nothing compelling from the competition either.

There remains a massive market that wants compelling options from $150, $200, $250, and $300 ranges. Failing to bring good options to these customers is truly missing the boat.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Anyways, it's kind of curious how NV, who basically stripped out compute and unnecessary functionality to make a "pure gaming" card in Maxwell, would switch to an "everything, including the kitchen sink (AI/DL)" approach for a consumer gaming card in the current time-frame.

It’s not that curious if you look at the historical context. AMD was becoming very competitive with NVidia at that time, and NVidia needed an overhaul to make its architecture more efficient.

It’s kind of funny because at the same time NVidia was scaling back their compute, AMD was pushing more of it in GCN which meant they weren’t as efficient. As a result NVidia doesn’t need to be as lean.

What will be really funny is if AMD pulls the same move as NVidia and history repeats itself. It will still be another couple of generations at least until ray tracing really takes off and is ready for the mainstream, but if AMD makes a much leaner pure gaming card, NVidia might be forced to do the same in turn again.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
It will still be another couple of generations at least until ray tracing really takes off and is ready for the mainstream

In 2020/2021 consoles will be using Ray Tracing.

It will be mainstream in 2 1/2 years, when a 5nm RTX 4060 with double the RT cores of an RTX Titan and the proper drivers/software runs AAA games @ 4k @ 100fps, and a 24GB RTX 4080ti with 3 times the 2080ti's bandwidth is doing 8k with RT without breaking a sweat.

AMD will be using ray tracing also (in software), and then EVERYONE on this forum will be praising it.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
I think RT will eventually take off, but I don't think RTG will have enough to fit it on a PS5/XBXX APU by the 2020 launch date. Not enough die space, or even enough research from them yet, for that I don't think.
 

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
Following this logic further, a standard design could have led to dramatic performance increases and reasonable prices with a more balanced approach in die sizes.

Fully agree. The issue is lack of competition. NV didn't need to make a consumer chip this round because competition from AMD is non-existent. It's much cheaper to reuse the pro chips, and we get little to no performance/$ increase.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
In 2020/2021 consoles will be using Ray Tracing.

All of this is wishful thinking. The next generation of consoles is supposedly based on Navi; it won’t have ray tracing, and it wouldn’t be nearly powerful enough if it did include it, which would make it a total waste. They’ll be out late this year at the earliest. That means another ~5 years before the next cycle.
 
  • Like
Reactions: Qwertilot

happy medium

Lifer
Jun 8, 2003
14,387
480
126
All of this is wishful thinking. The next generation of consoles is supposedly based on Navi; it won’t have ray tracing, and it wouldn’t be nearly powerful enough if it did include it, which would make it a total waste. They’ll be out late this year at the earliest. That means another ~5 years before the next cycle.
Ray tracing does not need RT cores in hardware to function; it's software-based in DirectX 12.
Navi could do ray tracing.
Consoles use a custom GPU based on PC GPUs.
And consoles will be out late next year, 2020, or early 2021.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Ray tracing does not need RT cores in hardware to function; it's software-based in DirectX 12.

Then RT is nothing special since the Xbox One X is already capable of doing it. Only no one wants to because the hardware isn't powerful enough.

The software based implementations aren't good enough and even the special hardware that NVidia is using isn't good enough. Now that 4k TVs have become dirt cheap, the push will be to have consoles capable of delivering an acceptable experience at that resolution.

The performance penalty from ray tracing would likely mean a console struggling to provide acceptable performance even at 1080p.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Then RT is nothing special since the Xbox One X is already capable of doing it. Only no one wants to because the hardware isn't powerful enough.

The software based implementations aren't good enough and even the special hardware that NVidia is using isn't good enough. Now that 4k TVs have become dirt cheap, the push will be to have consoles capable of delivering an acceptable experience at that resolution.

The performance penalty from ray tracing would likely mean a console struggling to provide acceptable performance even at 1080p.
21 months from now is a long way off; I'm sure ray tracing will be performing much, much better.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
21 months from now is a long way off; I'm sure ray tracing will be performing much, much better.

I understand your optimism, but think about it. The 2080ti is only really able to do what will be looked back upon as pretty rudimentary raytracing effects, with an absurdly massive die size and a quarter of those billions and billions (Carl Sagan voice) of transistors dedicated to RT-specific modules.

Now it's no accident that they had to make specific dedicated portions of the Turing design even to achieve this. Emulating RT using traditional cores is hugely less efficient, however interesting it is to see. It's like an order of magnitude less efficient, making it basically unworkable in any AAA gaming context.

Now looking forward to Navi and PS5, you see an even larger challenge. A full Navi 20 with a massive amount of dedicated memory and gigantic die size would be basically unable to do even token Raytracing in a useful manner. But in an APU scenario, trying to fit into ~160W or less in a console box WITH probably between 4 and 8 full Ryzen cores, in a die size that will certainly be smaller than 2080ti, you can see the math just doesn't add up.

At a hopefully more mature 7nm, ~600mmish, 160W, a hypothetical Ryzen/Navi APU will probably end up 10-12TF optimistically if yields are good and leakage is manageable. This is still gigantic, and if I had to guess, we're going to see really aggressive dynamic power and clock management much more sophisticated compared to the Jaguar APUs, which will allow certain areas to clock up under load by cutting back or idling other sections. But even at peak, with a vapor chamber cooling design, it will still come in far weaker than a PC config because of the limitations in size and thermal output. In a desktop, you can have a fat 7nm Ryzen 8-Core at 4Ghz+ only because you can sit it under a huge HSF, ditto a 2080ti with massive TF, under a highly engineered cooling design to keep up. In a console, the single cooler design and power budget will have to handle both.

Eg; desktop Ryzen at say ~100W + DGPU at 250-300W. 350W or so just there with no other component considerations.
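For what it's worth, here is the rough arithmetic behind that 10-12TF guess and the power split; the CU count and clock below are assumptions, not leaked specs.

```cpp
// Rough console-APU arithmetic with assumed numbers (not leaked specs).
#include <cstdio>

int main() {
    const int    cus       = 48;         // assumed compute units
    const int    shaders   = cus * 64;   // 64 shaders per CU
    const double clock_ghz = 1.8;        // assumed sustained game clock

    // 2 FP32 ops per shader per clock (FMA) -> TFLOPS
    const double tflops = 2.0 * shaders * clock_ghz * 1e9 / 1e12;
    std::printf("~%.1f TFLOPS\n", tflops);   // ~11.1

    // Desktop-style power split vs. a single shared console budget
    const double desktop_cpu_w = 100.0, desktop_gpu_w = 275.0;
    std::printf("Desktop parts: ~%.0f W combined; console APU budget: ~160 W\n",
                desktop_cpu_w + desktop_gpu_w);
    return 0;
}
```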

I mean, I would like to believe that PS5 will be able to do RT, but the probability is pretty slim, especially considering the huge push from AAA games publishers to chase 4k right now, and PS4 Pro not quite managing 4k in most games thus far.

Maybe 2023ish, PS5 Pro and new APU @ 20+ TF and a Navi2. But PS5 for fall 2020? That thing is almost certainly already taped out or close to it, and nearing final production stages, so that other engineering teams can start working on PCBs, memory interfaces, OS/API work, getting Dev kits through prealpha, alpha in-house stages, just so they can get final prelaunch devkits out to key partners in time for launch titles. It's an absolute crapton of work they have to all get in order, and waiting for a future possible response to RTX really isn't feasible under any foreseeable circumstances that I can see.
 
  • Like
Reactions: beginner99

happy medium

Lifer
Jun 8, 2003
14,387
480
126
I understand your optimism, but think about it. The 2080ti is only really able to do what will be looked back upon as pretty rudimentary raytracing effects, with an absurdly massive die size and a quarter of those billions and billions (Carl Sagan voice) of transistors dedicated to RT-specific modules.

Now it's no accident that they had to make specific dedicated portions of the Turing design even to achieve this. Emulating RT using traditional cores is hugely less efficient, however interesting it is to see. It's like an order of magnitude less efficient, making it basically unworkable in any AAA gaming context.

Now looking forward to Navi and PS5, you see an even larger challenge. A full Navi 20 with a massive amount of dedicated memory and gigantic die size would be basically unable to do even token Raytracing in a useful manner. But in an APU scenario, trying to fit into ~160W or less in a console box WITH probably between 4 and 8 full Ryzen cores, in a die size that will certainly be smaller than 2080ti, you can see the math just doesn't add up.

At a hopefully more mature 7nm, ~600mmish, 160W, a hypothetical Ryzen/Navi APU will probably end up 10-12TF optimistically if yields are good and leakage is manageable. This is still gigantic, and if I had to guess, we're going to see really aggressive dynamic power and clock management much more sophisticated compared to the Jaguar APUs, which will allow certain areas to clock up under load by cutting back or idling other sections. But even at peak, with a vapor chamber cooling design, it will still come in far weaker than a PC config because of the limitations in size and thermal output. In a desktop, you can have a fat 7nm Ryzen 8-Core at 4Ghz+ only because you can sit it under a huge HSF, ditto a 2080ti with massive TF, under a highly engineered cooling design to keep up. In a console, the single cooler design and power budget will have to handle both.

Eg; desktop Ryzen at say ~100W + DGPU at 250-300W. 350W or so just there with no other component considerations.

I mean, I would like to believe that PS5 will be able to do RT, but the probability is pretty slim, especially considering the huge push from AAA games publishers to chase 4k right now, and PS4 Pro not quite managing 4k in most games thus far.

Maybe 2023ish, PS5 Pro and new APU @ 20+ TF and a Navi2. But PS5 for fall 2020? That thing is almost certainly already taped out or close to it, and nearing final production stages, so that other engineering teams can start working on PCBs, memory interfaces, OS/API work, getting Dev kits through prealpha, alpha in-house stages, just so they can get final prelaunch devkits out to key partners in time for launch titles. It's an absolute crapton of work they have to all get in order, and waiting for a future possible response to RTX really isn't feasible under any foreseeable circumstances that I can see.
https://forums.anandtech.com/thread...onsoles-rt-cores-aren’t-the-only-way.2561494/

"In terms of the viability of ray tracing on next generation consoles, the hardware doesn’t have to be specifically RTX cores. Those cores aren’t the only thing that matters when it comes to ray tracing. They are fixed function hardware that speed up the calculations specifically relating to the BVH intersection tests. Those calculations can be done in standard compute if the computer cores are numerous and fast enough (which we believe they will be on the next gen consoles). In fact, any GPU that is running DX12 will be able to “run” DXR since DXR is just an extension of DX12."
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
It's theoretical, but the biggest problem is that even the Xbox One X at 6+TF is having trouble with 4k gaming in many titles. You really want about 10-12 for viable 4k without cutting back on a few things. To emulate what RTX does with the dedicated hardware takes a LOT, and if Nvidia felt like the only way to do cursory raytracing is to develop specialized hardware that's way more efficient at it, it's just not going to be possible on the smaller APUs with much, much less die space. It will be fighting to get enough juice to make 4k HDR native gaming possible, let alone dialing back so far that you could give like 75% of the GPU a job that it would still be worse at than a 2060.
 
  • Like
Reactions: beginner99