Confirmed: you don't need RTX for raytracing

Page 2 - AnandTech Forums

BFG10K

Lifer
Aug 14, 2000
22,709
2,958
126
https://www.3dcenter.org/news/raytr...uft-mit-guten-frameraten-auch-auf-der-titan-v

Tested using BF5 on a Volta with no RT cores. The performance gain from RTX is at best 45%, which means that all the time Jensen was screaming "10 Gigarays!" on stage, he neglected to mention Volta could already do 7 Gigarays.

So it took them ten years to get a 45% raytracing performance boost over traditional hardware. Wow.

You can bet nVidia will never unlock the feature on the likes of a 1080 Ti, given it would beat a 2060 and maybe even the 2070 in raytracing, yet again proving what garbage these cards really are.

Turding is a fraudulent scam.
 
Last edited:

ub4ty

Senior member
Jun 21, 2017
749
898
96
This has been discussed and dissected at length over at
https://forum.beyond3d.com/forums/rendering-technology-and-apis.40/
back in 2018.

Of course you don't need RTX to do Raytracing.
RT cores are nothing more than intersection test accelerators.
The quoted gigaray figures come from an unbelievably absurd scenario: you have a single polygon and you're slamming it head-on with rays, no divergence of any sort. The card then tells you whether or not each ray hit the polygon. It's essentially a flat pipeline-throughput test. Complete snake-oil marketing.

The real computing happens in ray generation, acceleration-structure builds (BuildRaytracingAccelerationStructure()), and ray-divergence scenarios (reflections, etc.). This is why BF performance was crap for some time: ray divergence and real-world ray characteristics are not optimal for GPUs. To work around it, tons of hacks and shortcuts get added to avoid the same scenarios that limited performance on GPUs without raytracing hardware. That, plus the fact that it's a hybrid pipeline involving the CUDA cores and other portions of the GPU (which extends frame calculation time and/or consumes resources used for traditional rendering), is why you take a yuge FPS hit when you enable this feature.
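To make "intersection test accelerators" concrete, here is a minimal software sketch of the standard Möller-Trumbore ray-triangle test, the kind of fixed-function math an RT core bakes into silicon. Plain Python, purely illustrative; a real GPU runs this against a BVH of millions of triangles.

```python
# Möller-Trumbore ray-triangle intersection: the core test that RT cores
# implement in hardware. In software it is just a handful of dot/cross
# products per ray-triangle pair.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Return distance t along the ray to the triangle, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# One ray fired straight at one triangle: the "gigarays" benchmark scenario.
hit = intersect((0, 0, -1), (0, 0, 1), (-1, -1, 0), (1, -1, 0), (0, 1, 0))
print(hit)  # 1.0: the ray hits the triangle one unit away
```

The divergence problem the post describes is exactly what this sketch hides: when neighboring rays bounce off in different directions, they stop sharing memory accesses and branch paths, and throughput collapses regardless of how fast each individual test is.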

So, underneath the covers, this is a complete gimmick and a prototype-level feature more suited to offline rendering acceleration (the Quadro series). GeForce users got conned and raked over the coals on price in order to subsidize a Quadro card. Before and after launch, this was the intelligent consensus; people detailed it at length to the naysayers who praised Jensen. It's just being confirmed by various outlets now.

If you want to watch a real talk on this tech with no B.S., head over to:
https://developer.apple.com/videos/play/wwdc2018/606/

You can do meme-tracing acceleration on an iPad if you wanted. It's just software. Apple even demos their software partitioning the task out to an eGPU with an AMD GPU. You're getting like a 2-8x speedup max.

Real-world gigaray performance is about half, or even less, of the marketed figures. If a 1080 does 400 Megarays, a 2080 does about 3.2 Gigarays (0.4 Gigarays x 8). A boon for offline rendering; an absolute gimmick for real-time game rendering. There are tons of hacks game developers will implement to hide this (which lands you right back in the pre-baked-graphics era), but you see it clearly in the significant FPS drop.
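For what it's worth, the arithmetic behind that estimate is trivial. The inputs (400 Megarays for a 1080, an 8x speedup from dedicated intersection hardware) are the poster's figures, not measurements:

```python
# Back-of-envelope check of the post's own figures (estimates, not
# measurements): a GTX 1080 at 0.4 Gigarays and an assumed 8x speedup
# from dedicated intersection hardware.
gtx1080_gigarays = 0.4
assumed_speedup = 8
rtx2080_estimate_gigarays = gtx1080_gigarays * assumed_speedup
print(rtx2080_estimate_gigarays)  # 3.2, well short of the "10 Gigarays" pitch
```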

As always, Leather jacket manned conned his gullible consumer base for max profit. Following in the same tragic footsteps as Intel...
 
Last edited:

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
You need "RTX Technology" to do RT with Nvidia cards. NV cards with Tensor cores do RT much faster than NV cards without Tensor cores. NV cards with both Tensor and RT cores are quite a bit faster still at RT.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Yep, I know that. But nVidia is implying otherwise. The purpose of the thread title is to highlight that nVidia's own ecosystem proved them wrong.

You are really stretching here.

There is only one exception, a ~$3000 Titan V, that comes close sometimes, according to some guy on a forum.

It would be a slideshow on just about any other non-RTX card.

Also consider that BF5 ray tracing was developed on the Titan V, so it shouldn't come as a surprise that the card used in development works...
 

PrincessFrosty

Platinum Member
Feb 13, 2008
2,301
68
91
www.frostyhacks.blogspot.com
https://www.3dcenter.org/news/raytr...uft-mit-guten-frameraten-auch-auf-der-titan-v

Tested using BF5 on a Volta with no RT cores. The performance gain from RTX is at best 45%, which means that all the time Jensen was screaming "10 Gigarays!" on stage, he neglected to mention Volta could already do 7 Gigarays.

So it took them ten years to get a 45% raytracing performance boost over traditional hardware. Wow.

You can bet nVidia will never unlock the feature on the likes of a 1080 Ti, given it would beat a 2060 and maybe even the 2070 in raytracing, yet again proving what garbage these cards really are.

Turding is a fraudulent scam.

Well, that's not quite true, because only some fraction of the RTX 2000-series die is dedicated to RT ops, which means those cards are doing a not-insignificant amount more work with something like a third of the die space. Cards that can do RT ops also still need reasonable raster performance at the same time to handle the traditional portion of the rendering workload.

And RT is barely usable now as it is, even with huge chips on a smaller node, so the idea that it would work on traditional hardware is just silly. Yes, you don't technically need RT cores for it; all you need is compatibility with the new DirectX Raytracing pipeline and enough horsepower to crunch the numbers. But for Nvidia to waste time enabling that on hardware that won't get anywhere close to playable frame rates would be completely pointless.
 

TestKing123

Senior member
Sep 9, 2007
204
15
81
Did the OP even actually look at the benchmark?

In the ray-tracing-heavy levels like Rotterdam, the 2080 Ti is 30 fps faster than a Titan V. That is a significant difference.

If anything, it shows how helpful the RT cores actually are, considering the Titan V has many more Tensor cores and CUDA cores. DICE used Titan Vs to develop its ray tracing, so it's no surprise it still works on one. The RT cores just take over that workload, which results in the 30 fps difference on the 2080 Ti.

However, this hasn't been replicated anywhere else, and no one has done an image-quality analysis to ensure the Titan V is in fact doing ray tracing, and not screen-space reflections with RT code slowing it down.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Quick, OP send this info to AMD. They can get into the reviewers slide deck before Feb 7th. Nvidia is in for it now! /s

TIL in this thread: my R9 290 in the basement PC is all I need for 4K Modern Games :rolleyes:
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
In the future - yes. Port Royal was designed with DXR in mind; Battlefield 5 was merely updated with it. The performance gap between Turing and Volta will grow.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,958
126
You are really stretching here.

There is only one exception, a ~$3000 Titan V, that comes close sometimes, according to some guy on a forum.

It would be a slideshow on just about any other non-RTX card.
You have no idea about that given there's no way nVidia would open it up to other cards. I bet 1080TI would easily beat a 2060 and possibly the 2070.

Did the OP even actually look at the benchmark?
Sure did.

In the ray-tracing-heavy levels like Rotterdam, the 2080 Ti is 30 fps faster than a Titan V. That is a significant difference.
30 FPS is 45% faster. If you need help calculating that, let me know.

If anything, it shows how helpful the RT cores actually are, considering the Titan V has many more Tensor cores and CUDA cores.
Why are you just counting cores as a metric? Like the other guy who was just looking at memory width?

It was already calculated on the other page that RTX has higher SP throughput and memory bandwidth than Volta. Factoring that in, along with the cost of RTX, 45% is an absolutely abysmal result, especially because they claim it took them 10 years to achieve.
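As a sanity check, the two figures in this exchange (a 30 fps gap that amounts to roughly 45%) are mutually consistent. Solving backwards gives the implied frame rates; these are derived from the quoted numbers, not independently measured:

```python
# Derive the implied frame rates from the thread's own numbers:
# a 30 fps gap that is claimed to be a ~45% gain over the Titan V.
gap_fps = 30
claimed_gain = 0.45
titan_v_fps = gap_fps / claimed_gain      # ~66.7 fps implied for the Titan V
rtx_2080ti_fps = titan_v_fps + gap_fps    # ~96.7 fps implied for the 2080 Ti
print(round(titan_v_fps, 1), round(rtx_2080ti_fps, 1))
```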
 
Reactions: kawi6rr

Timmah!

Golden Member
Jul 24, 2010
1,418
630
136
You have no idea about that given there's no way nVidia would open it up to other cards. I bet 1080TI would easily beat a 2060 and possibly the 2070.


It was already calculated on the other page that RTX has higher SP throughput and memory bandwidth than Volta. Factoring that in, along with the cost of RTX, 45% is an absolutely abysmal result, especially because they claim it took them 10 years to achieve.


Please read the article regarding V-Ray I posted a few posts above. It makes clear that raytracing consists of two parts: raycasting (which is accelerated by the RT cores) and shading (computed by the regular CUDA cores). If the difference between the RTX and the V is a meager 45 percent, it's because the BF devs, in their quest to run things in real time, cut down the raycasting part: since BF V uses raytracing just for reflections, they cast rays only against the parts of the scene that are reflective, and even that got optimized. If they raycast more of the scene, or perhaps all of it, it would be unplayable on both cards, but the performance difference between the cards would be much bigger. This is nicely demonstrated in Port Royal, where the difference between the Titans is apparently 2.5x; a lot more raycasting happens there than in Battlefield.

You could argue that the hardware isn't ready for real-time raytracing yet, or that the cards are way too expensive given what RTX brings to the table from a purely gaming standpoint, and you would have a point; that is certainly up for debate. But doubting the functionality of the RT cores based on performance in a single game is just bizarre. Especially when you can find articles like the one I posted, with CEOs of companies like Chaos Group and OTOY talking about the RT cores and showing metrics of how they improve the performance of their respective software... surely these guys don't lie.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Volta is not Pascal. Pascal would be much slower than Volta; it lacks all those neat tricks that accelerate raytracing before the RT cores even come into play.

You can use the Star Wars demo: an RTX 2070 is twice as fast as a GTX 1080 Ti at the same resolution.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
When you posted the statement below, it seemed so ignorant of basic memory specifications, in contrast to your normal postings, that I had to wonder if someone had hijacked your account. Ignoring the memory speeds is quite interesting.

All HBM does is sacrifice clocks and thus voltage for wider interface thus saving power. Wider and slower costs less power than narrower and faster, all else being equal of course.



3072-bit vs 384-bit. It's not a typo; it's 8 times as much.
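The point about speeds matters because peak bandwidth is bus width times per-pin data rate, not width alone. A quick sketch using public spec-sheet figures (Titan V: 3072-bit HBM2 at roughly 1.7 Gbps per pin; a 384-bit GDDR5X card at roughly 11.4 Gbps per pin); the per-pin rates are approximate:

```python
def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: (bus width in bytes) x per-pin data rate."""
    return bus_width_bits / 8 * gbps_per_pin

# 8x the bus width, but ~6.7x slower per pin -> only ~1.2x the bandwidth.
titan_v_hbm2 = bandwidth_gb_s(3072, 1.7)   # 652.8 GB/s
gddr5x_384 = bandwidth_gb_s(384, 11.4)     # 547.2 GB/s
print(titan_v_hbm2, gddr5x_384)
```

Which is exactly the trade-off described in the quoted post: HBM goes wide and slow to save power, so an 8x wider bus does not mean 8x the bandwidth.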
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Volta is not Pascal. Pascal would be much slower than Volta; it lacks all those neat tricks that accelerate raytracing before the RT cores even come into play.

You can use the Star Wars demo: an RTX 2070 is twice as fast as a GTX 1080 Ti at the same resolution.
Yes, Volta has Tensor cores, Pascal does not.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Tensor cores don't accelerate raytracing. Volta has the same cache structure as Turing. Combine this with HBM and Volta is more than twice as fast as Pascal.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Tensor cores don't accelerate raytracing. Volta has the same cache structure as Turing. Combine this with HBM and Volta is more than twice as fast as Pascal.


Performance
The OptiX AI denoising technology, combined with the new NVIDIA Tensor Cores in the Quadro GV100, delivers 3x the performance of previous-generation GPUs and enables fluid interactivity in complex scenes.

It's the Tensor cores that cause Volta to leave Pascal in the dust.