
TPU: many combinations of GPU/resolution/RTX/DLSS not allowed

I cannot remember ever making such a claim. Can you please share a quote?



Future nodes? You do not seem to realize that Moore's Law is slowing down and that heterogeneous computing is becoming more important.

In addition, for gaming you always need to solve global illumination and other effects for which, without ray tracing, only very crude approximation methods are currently available. So it's not as if the RT units are sitting idle in gaming workloads.
I meant games that exist right now with no RT usage: a GPU with general-purpose cores that can also do RT will have an advantage in such games over another GPU that reserves part of its die area for specialized hardware, because it can fit more cores. Tensor cores make it even worse. That's all.

By the way, even if node advancement is slowing, it still exists: 7 nm brings roughly a 100% improvement, and 5 nm and 3 nm are coming in the next few years, bringing another 100% at least. Not trivial in my opinion.
 

I was talking about future games only, not about legacy stuff. You cannot have both faster ray-tracing performance and lower prices without RT cores. In particular, I made the reference to consoles, where everything is about balancing price.
Tensor cores are pretty much useless in a gaming-oriented product - agreed.
 
Well, no Tensor cores for the 1660 Ti. I'm not surprised, given there was no way they could offer DLSS without cannibalizing the 2060. The 1660 Ti is better off without them, anyway.
 
DLSS is a good concept but seems poorly implemented. Even if they fix the blurriness, the fact that it needs to be trained on each game and resolution separately means it will remain a niche feature and not widely supported across all games.

I think there is a larger discussion that needs to be had about the future of the dGPU and the lack of titles worth buying a shiny new GPU for. I think we consumers, pro-users, enthusiasts, etc. need to step back and re-evaluate what kind of card WE want to buy rather than being told by AMD/nV that new tech justifies charging twice as much as last gen.

And willingly paying for it.

I agree, I don't see much incentive to keep buying new video cards at this point. You can get a slightly higher resolution or refresh rate, but the actual game graphics have not improved much for years now. It seems like any significant improvements from here would make the games prohibitively expensive to develop and won't be happening, especially with the big AAA companies already struggling to come up with good business models for games. For most existing games, the 1080 Ti is generally good enough even at 4K, even if I have to turn down a few settings that make no difference visually.
 

That's how it had to be, though? Otherwise you've just got a fixed hardware upscaler. NV do the training on their computers etc., so it's a really cheap thing for developers to support.

NV do clearly need to get quite a bit better at actually supporting it and doing the learning, though 🙂
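To illustrate why a learned upscaler ends up trained per game rather than shipped as one generic filter, here's a toy sketch. Everything in it is made up for illustration (shapes, data, the linear model) — this is not NVIDIA's pipeline, just the general idea that the filter is fit to one game's data distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-game training data: paired low-res and
# high-res pixel patches sampled from one specific game's frames.
low_res = rng.random((1000, 4))           # 2x2 input patches, flattened
high_res = low_res @ rng.random((4, 16))  # 4x4 target patches (toy mapping)

# Learn a linear upscaling filter for THIS game's data distribution.
W = np.zeros((4, 16))
for _ in range(500):
    pred = low_res @ W
    grad = low_res.T @ (pred - high_res) / len(low_res)
    W -= 0.5 * grad                       # plain gradient descent

# The learned filter fits the content it was trained on, which is why a
# DLSS-style model is trained per game rather than shipped as a fixed
# hardware upscaler.
mse = float(np.mean((low_res @ W - high_res) ** 2))
print(f"training MSE: {mse:.6f}")
```

The training cost lands on whoever runs this loop (NV's machines, per the post above), while the developer only ships the learned weights.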
 

Given how hot machine learning is right now, do you really think Nvidia is going to be just giving away training time on their supercomputer?
 

Can AI training be run across a DC grid? Maybe NV needs to build training software into their driver stack and let buyers do the training, with an opt-in.
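As a toy sketch of that opt-in idea — many machines computing gradients on their own local data while a coordinator only averages them — assuming a trivially simple linear model and made-up data (this is plain data-parallel averaging, not any real NV software):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: five opted-in machines each hold local frame data.
true_w = rng.random(4)
clients = []
for _ in range(5):
    x = rng.random((200, 4))   # local data never leaves the client
    y = x @ true_w
    clients.append((x, y))

# The coordinator never sees raw data, only per-client gradients.
w = np.zeros(4)
for _ in range(300):
    grads = [x.T @ (x @ w - y) / len(x)   # gradient computed locally
             for x, y in clients]
    w -= 0.5 * np.mean(grads, axis=0)     # coordinator averages and steps

print("max weight error:", float(np.max(np.abs(w - true_w))))
```

The appeal for an opt-in scheme is that only gradients (or weight updates) cross the network, not the users' data; the obvious costs are coordination overhead and stragglers.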
 
Given that NV already have the training hardware & its one of the features they're using to try and push RTX?

They might plausibly be giving the time away, yes.
 