Confirmed: you don't need RTX for raytracing

BFG10K

Lifer
Aug 14, 2000
22,672
2,816
126
https://www.3dcenter.org/news/raytr...uft-mit-guten-frameraten-auch-auf-der-titan-v

Tested using BF5 on a Volta with no RT cores. The performance gain from RTX is at best 45%, which means that all the while Jensen was screaming "10 Gigarays!" on stage, he neglected to mention Volta could already do 7 Gigarays.

So it took them ten years to get a 45% raytracing performance boost over traditional hardware. Wow.
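
(That 7 Gigaray figure is just the marketing number scaled back by the observed gain; a rough sketch of the arithmetic, assuming frame rate scales 1:1 with ray throughput, which is a big assumption:)

    turing_claim  = 10.0   # Nvidia's "10 Gigarays" marketing figure
    best_rtx_gain = 1.45   # best-case RTX speedup seen in the BF5 test
    print(f"Implied Volta rate: {turing_claim / best_rtx_gain:.1f} Gigarays")  # ~6.9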

You can bet nVidia will never unlock the feature on the likes of a 1080 Ti, given it would beat a 2060 and maybe even the 2070 in raytracing, yet again proving what garbage these cards really are.

Turding is a fraudulent scam.
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
4,722
4,625
136
https://www.3dcenter.org/news/raytr...uft-mit-guten-frameraten-auch-auf-der-titan-v

Tested using BF5 on a Volta with no RT cores. The performance gain from RTX is at best 45%, which means that all the while Jensen was screaming "10 Gigarays!" on stage, he neglected to mention Volta could already do 7 Gigarays.

So it took them ten years to get a 45% raytracing performance boost over traditional hardware. Wow.

You can bet nVidia will never unlock the feature on the likes of a 1080 Ti, given it would beat a 2060 and maybe even the 2070 in raytracing, yet again proving what garbage these cards really are.

Turding is a fraudulent scam.
The implication is that a modest amount of RT from the other side, aka AMD, could be used a lot more readily than thought possible. The argument that the lead in RT tech is multi-generational falls flat.
 
  • Like
Reactions: darkswordsman17

Guru

Senior member
May 5, 2017
830
361
106
Since ray tracing is an MS DX12 implementation (DXR) built directly into the latest Win10, it can be run at a software level. As you said, the RT cores do seem to provide an advantage over normal processing, but not by that much.
 
  • Like
Reactions: happy medium

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
5120 shader units on a 3072-bit HBM2 interface) versus the Titan RTX (4608 shader units on a 384-bit GDDR6 interface

11% fewer shaders, and 800% less memory bandwidth... 45% faster is still awesome.
Also, performance is not just FPS; how much each card gets utilized and how much it sits idle is also part of performance.
 

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
5120 shader units on a 3072-bit HBM2 interface) versus the Titan RTX (4608 shader units on a 384-bit GDDR6 interface

11% fewer shaders, and 800% less memory bandwidth... 45% faster is still awesome.
Also, performance is not just FPS; how much each card gets utilized and how much it sits idle is also part of performance.

How the hell do you get 800% less memory bandwidth?
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
Also, although the Titan RTX has fewer shaders, it also has higher base and boost clocks, resulting in net higher compute performance (single precision).

Titan V -> 5120 CUDA cores and 1455 MHz boost clock = 14.899 TFLOPS
Titan RTX -> 4608 CUDA cores and 1770 MHz boost clock = 16.312 TFLOPS

So Titan RTX has 9.5% faster compute performance and 2.9% more memory bandwidth than a Titan V.
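
For reference, a quick sketch of that arithmetic (the factor of 2 is one FMA = two floating-point ops per CUDA core per clock):

    # Peak FP32 throughput = CUDA cores * 2 FLOPs (FMA) * boost clock
    titan_v   = 5120 * 2 * 1455e6 / 1e12   # ~14.90 TFLOPS
    titan_rtx = 4608 * 2 * 1770e6 / 1e12   # ~16.31 TFLOPS
    print(f"Titan RTX advantage: {titan_rtx / titan_v - 1:.1%}")   # ~9.5%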
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
The performance gain from RTX is at best 45%, which means that all the while Jensen was screaming "10 Gigarays!" on stage, he neglected to mention Volta could already do 7 Gigarays.

If you set up a test that was basically a pure ray tracing benchmark, trying to saturate the RT cores as much as possible, I have no doubt the RTX would crush the Titan V, probably by a factor of 3x or more. However, in a real gaming environment even the Titan RTX isn't fast enough to fully ray trace the scene. So we get this hybridized approach with a bunch of compromises, and even then the CUDA cores of the RTX cards end up idling a lot, waiting on the RT cores to finish their calculations. That leaves a lot of room for another card with similar compute performance to fully utilize its compute cores on both the ray tracing and rasterizing paths and thus come within spitting distance of the other card.
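
A toy model of that idling effect (every number below is invented purely for illustration, not measured from either card):

    # Toy frame-time model; all numbers are made up for illustration only.
    shader_ms  = 9.0   # hypothetical raster/shading work per frame on the CUDA cores
    rt_core_ms = 5.0   # hypothetical RT-core (BVH/intersection) time per frame
    overlap    = 0.5   # fraction of the RT-core time that can hide under shader work

    # The shaders stall for whatever part of the RT work they can't overlap with.
    frame_ms = shader_ms + rt_core_ms * (1 - overlap)
    shader_utilization = shader_ms / frame_ms
    print(f"{frame_ms:.1f} ms/frame, CUDA-core utilization {shader_utilization:.0%}")

Any idle time in there is headroom that a card doing everything on its shaders never has, which is why the gap in a hybrid renderer ends up much smaller than in a pure ray-tracing benchmark.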

It will be interesting to see, going forward, how the disparity between the two changes with additional optimizations and implementations. With such a small niche of cards (at this point) able to run "hardware accelerated" ray tracing, it's hard to imagine developers will take the time to truly optimize a game to take best advantage of the available RT cores.
 

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
He's only looking at the width of the interface and ignoring the memory clocks. Sounds a lot better though when you say 800%.
Well, they didn't state the memory clocks; all I see on the German page is this.
So why does it have such a wide interface if it's useless? Is it useless? Marketing?
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,715
7,004
136
Well, they didn't state the memory clocks; all I see on the German page is this.
So why does it have such a wide interface if it's useless? Is it useless? Marketing?

Different types of memory tech. HBM uses an extremely wide bus with modestly clocked memory while GDDR memory uses a comparatively narrow bus with absurdly high clocks. Just different ways of accomplishing the same thing.

Different strengths and weaknesses that largely have nothing to do with actual performance, however.
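
A quick sketch of that, using the published per-pin rates (1.7 Gbps HBM2 on the Titan V, 14 Gbps GDDR6 on the Titan RTX):

    # Bandwidth = per-pin data rate * bus width; wide-and-slow vs narrow-and-fast
    titan_v_bw   = 1.7  * 3072 / 8   # 652.8 GB/s (HBM2)
    titan_rtx_bw = 14.0 *  384 / 8   # 672.0 GB/s (GDDR6)
    print(f"{titan_v_bw} vs {titan_rtx_bw} GB/s ({titan_rtx_bw / titan_v_bw - 1:.1%} difference)")

So the "narrow" Titan RTX actually ends up with slightly more bandwidth, which is where the 2.9% figure earlier in the thread comes from.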
 
  • Like
Reactions: darkswordsman17
Mar 11, 2004
23,030
5,495
146
Turing really seems like Nvidia trying to shoehorn a justification for consumers to buy these big chips. I'm not declaring anything (Nvidia is still in a solid position, AMD has yet to provide anything to really concern them, and Intel will take time to get up and running well - and I even have to wonder if we might see a big IP battle once Intel releases their GPU), but the fact that Nvidia felt they needed to push these cards to gamers - to me they seem like pro cards, render cards, but still pro cards, with RTX intended for those markets and not consumers until 7nm - makes me wonder if there's trouble with their 7nm plans, or their plans in general.

I don't know if Volta was ever supposed to make it to consumers, but we never got Volta gaming cards. Honestly, I think Pascal was still more than enough for gamers to hold things over until 7nm (just with some price drops, or maybe a port to 12nm, especially if they'd used that to add more processing units). But this makes me wonder whether we'll see 7nm from Nvidia anytime soon outside of maybe some high-end enterprise parts (like we got with Volta), or whether they found there was so little demand for Turing in the pro market that the only way to recoup the development cost and get good enough economies of scale was to push it to gamers too.

The implication is that a modest amount of RT from the other side, aka AMD, could be used a lot more readily than thought possible. The argument that the lead in RT tech is multi-generational falls flat.

Considering how half-baked this ray-tracing API stuff is, I had a hunch it'd be best dealt with by implementing it in the traditional raster pipeline, initially in software, and then figuring out how the hardware needs to change to improve performance. So basically you'd be better off putting those transistors to work adding traditional raster cores and brute-forcing as much as you can. On top of that, the DLSS stuff seems like it'd be the same way: you just have a supercomputer come up with an algorithm that offers the best perceived quality for a few targets (for instance the native resolution of the display; you could factor in viewing distance as well) and then adjust the game settings for that, with no need for specialized-function hardware, other than maybe on the cloud/server side doing the actual deep-learning analysis.

And the bonus would be that traditional games also see a boost (due to the extra grunt of the added processors). Plus you could do DLSS on older games and then tailor the settings to provide that image quality as well. But with how it is now, it's basically relegated to RTX cards, and they seem to have emphasized it as a forward-facing feature. It makes me wonder if this isn't their "fix" for the claims that Nvidia's cards suffer in performance over time as they focus resources on the more recent architectures, letting the older ones languish - with some claiming it's outright intentional and/or that they deliberately sabotage older cards' performance to push people to newer ones. (I'm not saying I agree with those claims, but they do exist, and this way, instead of trying to have people run GFE stuff to figure out what settings to use, they just turn the DLSS algorithm on.)
 
  • Like
Reactions: Arkaign

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Considering how half-baked this ray-tracing API stuff is, I had a hunch it'd be best dealt with by implementing it in the traditional raster pipeline, initially in software, and then figuring out how the hardware needs to change to improve performance. So basically you'd be better off putting those transistors to work adding traditional raster cores and brute-forcing as much as you can. On top of that, the DLSS stuff seems like it'd be the same way: you just have a supercomputer come up with an algorithm that offers the best perceived quality for a few targets (for instance the native resolution of the display; you could factor in viewing distance as well) and then adjust the game settings for that, with no need for specialized-function hardware, other than maybe on the cloud/server side doing the actual deep-learning analysis.

And the bonus would be that traditional games also see a boost (due to the extra grunt of the added processors). Plus you could do DLSS on older games and then tailor the settings to provide that image quality as well. But with how it is now, it's basically relegated to RTX cards, and they seem to have emphasized it as a forward-facing feature. It makes me wonder if this isn't their "fix" for the claims that Nvidia's cards suffer in performance over time as they focus resources on the more recent architectures, letting the older ones languish - with some claiming it's outright intentional and/or that they deliberately sabotage older cards' performance to push people to newer ones. (I'm not saying I agree with those claims, but they do exist, and this way, instead of trying to have people run GFE stuff to figure out what settings to use, they just turn the DLSS algorithm on.)

Given how much ray tracing has been used in gaming up until now (zero), Nvidia was not going to go heavy on that aspect of these new cards. And given how little time developers have spent writing ray-tracing code, nothing they create at this point is going to be polished and well designed.

This is a jumping-off point, much like the HD 5870 was with tessellation, but even worse off, as it's even more demanding and more of a departure from what was done before.

While they are trying to use it as a selling point, we all know this is just them putting their toes in the water, to get the ball rolling for the future. If they never did anything, ray-tracing would never come around. As it is now, we can expect to see some experimentation for a few years, and possibly a shift in design in a few more.
 

psolord

Golden Member
Sep 16, 2009
1,875
1,184
136
So all that means AMD has the option to create a GPU with no native RT hardware but behemoth compute, one that will do epic old-school rendering and still come close in RT workloads anyway. OK, for the higher-end cards at least.
 
  • Like
Reactions: Arachnotronic

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Volta has more Tensor cores than Turing. That's probably why Volta does okay at RT.

Pascal has no tensor cores, I think?
 

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
Volta has more Tensor cores than Turing. That's probably why Volta does okay at RT.

Pascal has no tensor cores, I think?

Does Battlefield actually use the tensor cores for its raytracing? I thought that they were only used by the deep learning upscaler part of "RTX".
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Does Battlefield actually use the tensor cores for its raytracing? I thought that they were only used by the deep learning upscaler part of "RTX".
I doubt it's a coincidence that the two newer GPUs with tensor cores can do ray tracing pretty well, and the one that doesn't have them can't.
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
The tensor cores can be used for denoising after the ray tracing is complete, but BF5 doesn't use them at this time; it uses its own denoising algorithm that runs on the traditional CUDA cores.
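
To make that concrete: a denoiser is ultimately just image filtering, which any compute unit can run. A crude, purely illustrative stand-in in NumPy (a simple box blur, nothing like DICE's actual filter, which is far more sophisticated):

    import numpy as np

    def box_denoise(img, radius=2):
        # Average each pixel with its neighbours; real denoisers are much
        # smarter (edge-aware, temporally accumulated), but the work is
        # still ordinary shader/compute math.
        out = np.zeros_like(img)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return out / (2 * radius + 1) ** 2

    noisy = np.random.rand(64, 64)   # stand-in for a 1-sample-per-pixel ray-traced buffer
    print(noisy.std(), box_denoise(noisy).std())   # noise (std dev) drops after filtering

The point is just that nothing about this step needs tensor cores; they can speed it up, but ordinary shader math gets you there too.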