Confirmed: you don't need RTX for raytracing


BFG10K

Lifer
Aug 14, 2000
22,709
2,971
126
https://www.3dcenter.org/news/raytr...uft-mit-guten-frameraten-auch-auf-der-titan-v

Tested using BF5 on a Volta card with no RT cores. The performance gain from RTX is at best 45%, which means all the time Jensen was screaming "10 Gigarays!" on stage, he neglected to mention Volta could already do roughly 7 Gigarays.

So it took them ten years to get a 45% raytracing performance boost over traditional hardware. Wow.
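As a rough back-of-envelope sketch of that claim (using only Nvidia's marketing 10 Gigarays figure and the best-case 45% gain from the 3DCenter numbers above, so purely illustrative):

```cpp
#include <cstdio>

int main() {
    // If Turing's RT cores give at best a 45% higher ray-tracing frame rate
    // than Volta's shader cores, and Nvidia quotes ~10 Gigarays/s for Turing,
    // then Volta would already be managing roughly 10 / 1.45 Gigarays/s.
    const double turing_gigarays = 10.0;  // Nvidia's marketing figure
    const double rtx_speedup     = 1.45;  // best-case gain from the BF5 test above
    std::printf("Implied Volta throughput: %.1f Gigarays/s\n",
                turing_gigarays / rtx_speedup);  // prints ~6.9
    return 0;
}
```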

You can bet nVidia will never unlock the feature on the likes of a 1080 Ti, given it would beat a 2060 and maybe even the 2070 in raytracing, yet again proving what garbage these cards really are.

Turding is a fraudulent scam.
 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
I'm going with NV's explanations of what Tensor cores do regarding RTX.

Whatever Port Royal does, oh well.

Battlefield 5 doesn't use the tensor cores as part of the ray tracing pipeline either. They may at a future point, but they aren't used in the current engine. Tensor cores potentially have lots of uses, even outside of what Nvidia has said they can do with DLSS and de-noising, but developers still have to write the code to make use of them; it's not an on/off switch. With time I'm sure we'll see them utilized more.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
If they aren't using Tensor Cores, then they are leaving performance on the table.
I suspect they want their test to be vendor-independent, which I guess means you can't use tensor cores, as they are specific to Nvidia, whereas the ray tracing is part of the DX12 standard.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,971
126
The performance gap is much wider in Port Royal.
Sure, sure, at a glorious 25FPS on a 2080:


We were told games designed for RTX ray tracing from the ground up would run faster, but Port Royal is running slower than BF5.

Another lie, just like "switch it on with no developer effort and it automatically works everywhere!"
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Sure, sure, at a glorious 25FPS on a 2080:


We were told games designed for RTX ray tracing from the ground up would run faster, but Port Royal is running slower than BF5.

Another lie, just like "switch it on with no developer effort and it automatically works everywhere!"
I'm not sure I'd call it a lie. The reality is that BF5 only uses limited amounts of ray tracing. A game that uses more RT is going to have lower performance, even if it uses it more efficiently.

The point is that even if a game is built for it from the ground up, it's still not fast enough on current hardware to get good FPS if it is used as more than just a little bit.

The good news is that they are getting the ball rolling for the future.
 

JasonLD

Senior member
Aug 22, 2017
485
445
136
Sure, sure, at a glorious 25FPS on a 2080:


We were told games designed for RTX ray tracing from the ground up would run faster, but Port Royal is running slower than BF5.

Another lie, just like "switch it on with no developer effort and it automatically works everywhere!"

That is like criticizing a GPU's gaming performance based on its Firestrike framerate.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
I think someone posted that Port Royal is not using the Tensor cores?

Did the company that produced Port Royal explain somewhere exactly what they are doing?

It would seem that denoising would be a standard part of the Real Time Raytracing stack, and should be part of the DXR API.

If so, the next question is: do the drivers invoke Tensor cores on a generic ray tracing denoising DXR API call, or is it another case of something like DLSS that requires specific training for each game?

If this is something like DLSS that requires very specific Nvidia optimization for each title, then it is probably fair to ignore it in cross-platform testing.
 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
Did the company that produced Port Royal explain somewhere exactly what they are doing?

It would seem that denoising would be a standard part of the Real Time Raytracing stack, and should be part of the DXR API.

If so, the next question is: do the drivers invoke Tensor cores on a generic ray tracing denoising DXR API call, or is it another case of something like DLSS that requires specific training for each game?

If this is something like DLSS that requires very specific Nvidia optimization for each title, then it is probably fair to ignore it in cross-platform testing.

Denoising must be done with current real-time ray tracing because of the limited number of rays they are able to use; without it, the ray-traced output looks like old analog TV static.

Tensor cores aren't used on a generic ray tracing pass; they are Nvidia-specific, and no one is using them for this purpose at the moment. According to https://www.pc-better.com/port-royal-performance-analysis/ , both BF5 and Port Royal do the denoising as part of the TAA pass(es) with very little performance penalty.
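To give a feel for why the TAA pass can double as the denoiser, here's a minimal toy sketch (plain C++, not DICE's or UL's actual code; the noise level and blend weight are made-up illustrative values). A 1-sample-per-pixel ray-traced value is extremely noisy each frame, but exponentially blending it with the history buffer, which is exactly the kind of accumulation TAA already does, converges toward the clean value within a handful of frames:

```cpp
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    const double true_radiance = 0.5;                   // what an offline render would converge to
    std::normal_distribution<double> noise(0.0, 0.25);  // a 1 spp ray-traced estimate is this noisy

    const double blend = 0.2;                     // weight given to the new frame (illustrative)
    double history = true_radiance + noise(rng);  // seed history with the first noisy frame

    for (int frame = 1; frame <= 15; ++frame) {
        double raw = true_radiance + noise(rng);          // this frame's raw ray-traced sample
        history = (1.0 - blend) * history + blend * raw;  // temporal accumulation = the denoising
        std::printf("frame %2d  raw % .3f  accumulated %.3f\n", frame, raw, history);
    }
    return 0;
}
```

The real engines do this per pixel with motion-vector reprojection so the history survives camera movement, which is where the quality and ghosting trade-offs come in.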
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Denoising must be done with current real-time ray tracing because of the limited number of rays they are able to use; without it, the ray-traced output looks like old analog TV static.

Tensor cores aren't used on a generic ray tracing pass; they are Nvidia-specific, and no one is using them for this purpose at the moment. According to https://www.pc-better.com/port-royal-performance-analysis/ , both BF5 and Port Royal do the denoising as part of the TAA pass(es) with very little performance penalty.
Well, using the standard GPU cores to do denoising means they can't be used for normal rasterised rendering at the same time. I suspect the tensor cores would do it better and free up the normal cores to do other stuff. A future patch of BF5 is meant to move over to using the tensor cores for denoising.

DX does need to add support for these, but right now the implementation is proprietary to Nvidia - not just because they make the cards that have it, but also because all the training hardware and software belongs to Nvidia too.

Hence it's not just a matter of adding AI support: MS would also need the back-end AI supercomputers, and to write the software for those supercomputers to generate the algorithms - something I guess they will do for the Xbox 2 (which is bound to have some form of ray tracing/AI).
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
[QUOTE="Dribble, post: 39715718, member: 177181"
Hence it's not just a matter of adding AI support, MS would also need the back end AI super computers and to write the software for those super computers to generate the algorithms - something I guess they will do for the Xbox 2 (which is bound to have some form of ray tracing/AI).[/QUOTE]

I just read an article that said the Xbox 2 will have a Navi GPU with ray tracing, confirmed for 2020.
 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
Well, using the standard GPU cores to do denoising means they can't be used for normal rasterised rendering at the same time. I suspect the tensor cores would do it better and free up the normal cores to do other stuff. A future patch of BF5 is meant to move over to using the tensor cores for denoising.

DX does need to add support for these, but right now the implementation is proprietary to Nvidia - not just because they make the cards that have it, but also because all the training hardware and software belongs to Nvidia too.

Hence it's not just a matter of adding AI support: MS would also need the back-end AI supercomputers, and to write the software for those supercomputers to generate the algorithms - something I guess they will do for the Xbox 2 (which is bound to have some form of ray tracing/AI).

According to the benchmarks in the link I provided, tensor cores won't really help performance when it comes to denoising. Applying TAA vs no TAA in that scene resulted in a 1-2% performance penalty, and that's for the full scene, not just the denoising part. However, the tensor cores may be able to do it at the same performance but with higher quality, especially in motion.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Odd how doing something the way they designed it to work actually does, well, work rather better...

Amazing it took this long to get the software support sorted for a demo, mind.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
Is there any difficulty in making the high end Navi chip able to use both HBM2 and GDDR6?

No, but it does take up additional die area. With HBM, the PHY (physical layer) is smaller relative to the total width of the memory bus, but I'm not quite sure how it works out in terms of actual die area. GDDR runs faster to compensate for the smaller bus width, and HBM runs slower since the bus is massive in comparison.

I suppose you can look at it in two different ways. If you make a big enough chip, the percentage increase for adding a separate interface to memory is small relative to the size of the chip, but you also have to look at it as a case of making a big chip even bigger.
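To put rough numbers on the wide-and-slow versus narrow-and-fast trade-off (the pin speeds below are typical illustrative figures, not any specific card's spec):

```cpp
#include <cstdio>

// Bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8, in GB/s.
// HBM gets its bandwidth from a very wide, slow bus; GDDR6 from a narrow, fast one.
double bandwidth_gbs(double bus_width_bits, double gbps_per_pin) {
    return bus_width_bits * gbps_per_pin / 8.0;
}

int main() {
    std::printf("One HBM2 stack, 1024-bit @  2 Gbps/pin: %4.0f GB/s\n", bandwidth_gbs(1024, 2.0));
    std::printf("GDDR6 setup,     256-bit @ 14 Gbps/pin: %4.0f GB/s\n", bandwidth_gbs(256, 14.0));
    return 0;
}
```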
 
Mar 11, 2004
23,074
5,557
146
Well using the standard gpu cores to do denoise means they can't be used for normal rasterised rendering at the same time. I suspect the tensor cores would do it better and free up the normal cores to do other stuff. A future patch of BF5 is meant to move over to using the tensor cores for denoising.

DX does need to add support for these, but right now the implementation is proprietary Nvidia - not just because they have the cards with it on, but also the all the training hardware and software belongs to Nvidia too.

Hence it's not just a matter of adding AI support, MS would also need the back end AI super computers and to write the software for those super computers to generate the algorithms - something I guess they will do for the Xbox 2 (which is bound to have some form of ray tracing/AI).

The thing is, much of that is done in the final stages, and it'll still be doing that even with ray-tracing. It's part of the final processing stage of the image, so if the denoising is integrated into that final single pass, as with some AA modes, all the better. I'm not sure how much you'd really be freeing up there since it'd likely be done anyway. Frankly, I don't get running algorithms in the cloud just to push them to specialized hardware. You could do the same analysis and just use it to tweak already-used methods. The other thing is that, without pushing the specialized hardware, you'd likely end up with more traditional rasterization transistors, and those can be used for more. So run the analysis and tweak the settings for traditional rasterizing, which you could already do (wasn't that even the point of GFE?).

Er, you do know Microsoft is the one that made the ray-tracing API, right? You also know they have already been doing a lot of that analysis (they touted extensive analysis feeding the design of Project Scorpio, for instance) and were even running AI on their Azure cloud, right? That was something they touted with the One announcement, so it's not some sudden thing. I'd guess they're ahead of everyone on AI implementation in gaming (as far as having the hardware in place and software implementation goes). I expect Microsoft will do AI like they did with the One, largely in the cloud (again, I don't get this "figure the algorithm out in the cloud only to push it to specialized hardware" approach when you can just skip the specialized hardware and run it from the cloud).

And I'm skeptical that there will be much beyond superficial ray-tracing support (but maybe we'll see that allegedly faster/easier lighting method that was touted so much by Nvidia early with RTX) on the consoles. Unless Microsoft rolls their own ray-tracing hardware, in which case RTX might be dead in the water for gaming unless it's quite similar or is somehow a lot more efficient, so that it could run Microsoft's version without much of a hit, since Microsoft is the one controlling the API and would likely know what would be good hardware to run what they want from it. I'm sure they'll tout it, but much like how they touted DX12 on the One, I don't think it'll actually be much of a big deal (maybe they'll have some Geometry Wars-esque game that can show it off; Sony has had some similar ones, where they were sorta indie or even puzzle games that had big flashy graphics but weren't terribly complex beyond that like typical modern games are). I personally don't see ray-tracing taking off until they put cloud resources behind it, where it'll provide the grunt to do it more extensively (so it'll actually wow people properly) and with better performance.

The other thing to take into account, Microsoft is already going to be moving at least a half-step towards doing things in the cloud as they move towards streaming. Microsoft themselves said that (they have 2 consoles coming, one is traditional, and then a second one is streaming focused, but will do some local processing since network isn't quite there for full game streaming). I really can't see them pushing specialized hardware to consoles when that stuff is almost guaranteed to be done in the cloud in the future. Furthermore, much of that is just integrated into the traditional GPU pipeline on AMD's GPUs, so it'd be weird to double up on it with more specialized hardware. They'd be smarter to just have the backend do it and stream it.

I expect they'll do some of this stuff and they'll talk it up, but I'm not expecting anything crazy. And the new CPU and GPU should bring substantial improvements, such that I think early games will focus on reaping those benefits (higher resolutions, draw distances, NPCs/onscreen characters, etc).
 

coercitiv

Diamond Member
Jan 24, 2014
6,199
11,895
136
Odd how doing something the way they designed it to work actually does, well, work rather better...

Amazing it took this long to get the software support sorted for a demo, mind.
That's the problem though: if it works well and it's a streamlined process, why aren't we seeing it gain immediate traction with existing games? According to Nvidia the gains are fantastic (perf & quality), and yet we keep seeing demos instead of wide adoption.