This was discussed and dissected at length back in 2018 over at:
https://forum.beyond3d.com/forums/rendering-technology-and-apis.40/
Of course you don't need RTX to do ray tracing.
RT cores are nothing more than intersection test accelerators.
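To make that concrete, here's the classic ray/triangle test (Moller-Trumbore) as a plain C++ sketch. This is my own illustration of the kind of intersection work an RT core runs in fixed function, not anyone's actual hardware or driver code; nothing about it requires special silicon, it's just faster when baked into hardware:

```cpp
// Minimal ray/triangle intersection (Moller-Trumbore), plain C++.
// This is the kind of core test that "RT cores" accelerate in hardware.
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the hit distance t if the ray (orig + t*dir) hits the triangle.
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;          // ray parallel to triangle plane
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;          // outside first barycentric bound
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;      // outside second barycentric bound
    t = dot(e2, q) * inv;
    return t > eps;                                  // hit must be in front of the origin
}

int main()
{
    // One triangle, one ray fired straight at it: the "gigarays" benchmark scenario.
    Vec3 v0{-1,-1,5}, v1{1,-1,5}, v2{0,1,5};
    float t;
    bool hit = rayTriangle({0,0,0}, {0,0,1}, v0, v1, v2, t);
    std::printf("hit=%d t=%.2f\n", hit, t);
}
```

A dedicated unit just runs this one test faster and massively in parallel.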
The quoted gigaray figures come from an unbelievably absurd scenario: you take a single polygon and slam it head-on with rays, with no divergence of any sort, and the card tells you whether or not each ray hit the polygon. It's essentially a flat pipeline throughput test. Complete snake-oil marketing.
The real computing happens in ray generation, in building the acceleration structure (BuildRaytracingAccelerationStructure() in DXR), and in ray divergence scenarios (reflections, etc.). This is why BF performance was crap for some time: ray divergence and real-world ray characteristics are simply not optimal for GPUs. To work around it, tons of hacks and shortcuts get added to avoid the very scenarios that limited performance on GPUs without ray tracing hardware. That, plus the fact that it's a hybrid pipeline involving the CUDA cores and other parts of the GPU (which stretches frame time and/or eats resources needed for traditional rendering), is why you take a yuge FPS hit when you enable this feature.
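For a sense of where that real work sits: the acceleration structure the API builds is essentially a bounding-volume hierarchy over the scene's triangles. Here's a toy, software-only sketch of that build step (my own illustration of the concept, not the DXR or driver implementation). It's ordinary sort-and-partition work that runs on the regular compute units, not on RT cores, and it has to be rebuilt or refit whenever geometry animates:

```cpp
// Toy CPU build of a bounding-volume hierarchy (BVH), the kind of structure
// a call like BuildRaytracingAccelerationStructure() produces under the hood.
// Median split over triangle centroids; no SAH, no tracer - just the build cost.
#include <algorithm>
#include <cstdio>
#include <vector>
#include <random>

struct AABB {
    float lo[3] = { 1e30f, 1e30f, 1e30f}, hi[3] = {-1e30f, -1e30f, -1e30f};
    void grow(const float p[3]) {
        for (int i = 0; i < 3; ++i) { lo[i] = std::min(lo[i], p[i]); hi[i] = std::max(hi[i], p[i]); }
    }
};

struct Tri  { float c[3]; AABB box; };                   // centroid + bounds of one triangle
struct Node { AABB box; int first, count, left = -1, right = -1; };

static int build(std::vector<Tri>& tris, std::vector<Node>& nodes, int first, int count)
{
    Node n; n.first = first; n.count = count;
    for (int i = first; i < first + count; ++i) { n.box.grow(tris[i].box.lo); n.box.grow(tris[i].box.hi); }
    int self = (int)nodes.size(); nodes.push_back(n);
    if (count <= 2) return self;                         // small leaf, stop splitting
    // Split along the widest axis at the median centroid.
    int axis = 0;
    float ext[3] = { n.box.hi[0]-n.box.lo[0], n.box.hi[1]-n.box.lo[1], n.box.hi[2]-n.box.lo[2] };
    if (ext[1] > ext[axis]) axis = 1;
    if (ext[2] > ext[axis]) axis = 2;
    int mid = first + count / 2;
    std::nth_element(tris.begin() + first, tris.begin() + mid, tris.begin() + first + count,
                     [axis](const Tri& a, const Tri& b){ return a.c[axis] < b.c[axis]; });
    int l = build(tris, nodes, first, mid - first);
    int r = build(tris, nodes, mid,  first + count - mid);
    nodes[self].left = l; nodes[self].right = r; nodes[self].count = 0;
    return self;
}

int main()
{
    // A million random triangle bounds: even this toy build is real work,
    // and it has to be redone or refit every frame that geometry deforms.
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> d(0.f, 100.f);
    std::vector<Tri> tris(1'000'000);
    for (auto& t : tris) {
        float a[3] = {d(rng), d(rng), d(rng)};
        for (int i = 0; i < 3; ++i) t.c[i] = a[i];
        t.box.grow(a);
        float b[3] = {a[0] + 1, a[1] + 1, a[2] + 1};
        t.box.grow(b);
    }
    std::vector<Node> nodes; nodes.reserve(2 * tris.size());
    build(tris, nodes, 0, (int)tris.size());
    std::printf("built %zu BVH nodes\n", nodes.size());
}
```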
So, under the covers, this is a complete gimmick and a prototype-level feature better suited to offline rendering acceleration (the Quadro line). GeForce users got conned and raked over the coals on price in order to subsidize a Quadro card. Before and after launch, this was the informed consensus... people laid it out in detail to the naysayers who praised Jensen, and it's only now being confirmed by various outlets.
If you want to watch a real talk on this tech with no B.S., head over to:
https://developer.apple.com/videos/play/wwdc2018/606/
You can do meme-tracing acceleration on an iPad if you want to. It's just software. Apple even demos their framework handing part of the work off to an eGPU with an AMD GPU inside. You're getting maybe a 2-8x speedup at most:
real-world gigaray performance is about half, or even less than half, of the marketed figures. If a 1080 does 400 megarays, a 2080 does roughly 3.2 gigarays (0.4 gigarays x 8). A boon for offline rendering... an absolute gimmick for real-time game rendering. There are tons of hacks game developers will implement to hide it (which lands you right back in the pre-baked graphics era), but you see it plainly in the significant FPS drop.
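Spelled out, the back-of-envelope math above looks like this. The 400 megaray baseline and 2-8x speedup are the numbers quoted in this post; the marketed figure of roughly 8 gigarays/s for the 2080 is from memory of the launch marketing, so treat it as approximate:

```cpp
// Back-of-envelope check of the figures quoted above (not official specs).
#include <cstdio>

int main()
{
    double gtx1080_gigarays = 0.4;  // ~400 megarays/s in software, as quoted above
    double speedup_max      = 8.0;  // upper end of the claimed 2-8x hardware speedup
    double marketed         = 8.0;  // approximate launch marketing figure for the 2080, from memory

    double realistic = gtx1080_gigarays * speedup_max;   // 0.4 * 8 = 3.2 gigarays/s
    std::printf("realistic: %.1f gigarays/s (%.0f%% of the marketed %.0f)\n",
                realistic, 100.0 * realistic / marketed, marketed);
}
```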
As always, Leather Jacket Man conned his gullible consumer base for max profit, following in the same tragic footsteps as Intel...