It's not the next big thing in graphics, and it will never become viable on current transistor-based silicon processors.
Currently it cuts frame rates by 66% and doesn't even look that good.
Moore's law has basically stopped die shrinking, and now we are at the point of 5-year cycles for a real node shrink. GPUs are becoming massive, and the cost along with them.
So if a £1500 GPU produces 33% of the required performance on the current node, and the next node gives 25% more performance per pound at the same power, then matching full performance today costs £4500, falling to £3600 on 7nm. In 5 years' time, with the same improvement again, you can expect roughly £2880 for a further 25% gain in RT. Compounding 25% twice over two new nodes gives 56% more ray-traced frames than the current 33% baseline, which means you are getting about 51% of current normal fps after two node shrinks.
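The projection above can be sketched as a few lines of arithmetic. All the inputs (the £4500 full-performance cost, the 25%-per-node gain, the 33% RT baseline) are the post's own assumptions, not real market data:

```python
# Assumptions taken from the post: a £1500 GPU hits 33% of the target
# RT frame rate, so ~£4500 of silicon buys full performance today,
# and each node shrink gives 25% more performance per pound.
COST_FOR_FULL_RT = 4500   # £, at the current node
NODE_GAIN = 1.25          # assumed perf/£ improvement per shrink
RT_BASELINE = 0.33        # fraction of normal fps that RT runs at today

for shrinks in (1, 2):
    cost = COST_FOR_FULL_RT / NODE_GAIN ** shrinks
    print(f"after {shrinks} shrink(s): ~£{cost:.0f} for full RT performance")

# Compounded gain at a fixed price point after two shrinks:
gain = NODE_GAIN ** 2                 # 1.5625, i.e. ~56% more frames
rt_fraction = RT_BASELINE * gain      # ~0.52 of current raster fps
print(f"RT runs at ~{rt_fraction:.0%} of current normal fps")
```

Same model, just made explicit: two shrinks take you from £4500 to £3600 to £2880, and from 33% to roughly half of today's rasterized frame rate.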
The beauty of GPU silicon is that it scales very well and performance is pretty linear, so unless a miracle new architecture comes out, explain to me how this is going anywhere. No technology has ever been sustained in the PC space by elite hardware alone; if it is not mainstream, it dies.
Tech will start to focus on things like research into new materials and, more importantly, lowering manufacturing costs. You can get a long way on lowering manufacturing costs, because then you can start simply adding more GPUs and work on rendering that splits the load between many GPUs a lot better than, say, SLI.
It's almost always paradigm shifts that allow tech to continue advancing. People pick the easy route, which is doing more of the same but better and faster, until that gravy train runs out, and then they invest in R&D to do something fundamentally new.
RT is tough on cards, but there's something like a third of the GPU dedicated to it right now. Future chips will have more transistors in them, and I'd be willing to bet they end up being dedicated to something like RT rather than rasterization. If you double the transistor count going down to 7nm, you're not necessarily going to keep the ratio of RT cores to FP32 the same; you're going to slowly shift the ratio so more and more is dedicated to RT. The 2080 can already slay raster games at 4k 60fps no problem, even the AAA titles. When Nvidia makes a 3080 or whatever next gen will be called, I doubt many more of those transistors are going on FP32.
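A toy die-budget model shows why shifting the ratio matters so much. The numbers here (a 1/3 RT split today, a doubled transistor budget at 7nm) are the post's rough guesses, not real Turing figures:

```python
# Hypothetical split today: ~1/3 of the transistor budget on RT,
# ~2/3 on raster (FP32). Budget doubles at the next node.
rt_today = 1 / 3
raster_today = 2 / 3
budget_7nm = 2.0  # doubled transistor count, relative to today's 1.0

# If raster silicon is held flat (it's already "enough" for 4k60)
# and everything new goes to RT:
raster_7nm = raster_today              # unchanged raster resources
rt_7nm = budget_7nm - raster_7nm       # 4/3 of today's whole die

print(f"RT silicon grows {rt_7nm / rt_today:.0f}x while raster stays flat")
```

Under those assumptions, RT silicon quadruples in one doubling of the transistor budget, which is the whole argument: the growth all lands on the RT side of the ratio.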
Part of this argument, I'd say, is about flipping the question upside down: what would you spend all that power on if you dedicated a 2080 or 2080 Ti chip to just rasterization? You're talking probably 2x the speed they are now in total. Running 4k games at 120hz instead of 60? There's a niche market if there ever was one. 8k won't be even remotely mainstream for another 5+ years. Multi-monitor is also really niche.
You could argue: spend it on better effects and lighting and whatnot. But we've really reached the point where faking it further is going to give diminishing returns; if we want better quality, we need a better rendering method. So RT was inevitable. It's just a case of when it most makes sense to make the leap, and I think now makes a lot of sense despite the fact that it's a painful one. If Nvidia had dedicated an entire 2080 to just rasterization, do you know what all the reviews would read right now? They'd read: there's literally no point in anyone with a GPU from the last 3-4 years getting one of these cards, unless you're one of the exceptionally few people who run at 4k+ (multi-monitor setups with a res totaling more than 4k) or someone who wants 4k 100+fps, both infinitesimally small markets.