Maybe ray tracing should be done on the CPU, putting those unused cores to work and leaving the GPU to do the rest of the 3D work. CPUs are actually not that terrible at it.
This may be an example where competition doesn't result in anything good. Nvidia wants a piece of Intel's pie, so they're pushing ray tracing onto the GPU: less work for the CPU means more GPU demand.
Again, Moore's Law's slow death is going to bring about a change in this line of thinking, or no one will benefit.
We had nearly free, enormous gains for decades. Now that's gone, so everyone is frantically trying to find a way to keep up the pace of advancement. It won't come without sacrifices, unfortunately.
The difference between RTX and older changes like hardware T&L and programmable vertex shaders is that back then there were enormous gains to be had from raising TDPs and moving to a new process node. Both of those are now becoming a precious resource.
Delay over the PCIe bus kills ideas like this, sadly. Gabe made the same criticism of PhysX back in the day, when it ran on an external accelerator card, which is what it was originally designed for before Nvidia bought the technology. If you want the physics to interact with game logic, the game logic has to fetch the state back from the PhysX hardware, which is very slow. That's why most of the advanced PhysX features, the stuff that was too slow for the CPU and required hardware acceleration, never interacted with the game logic. You could have a nice cloth flag that blew in the wind and tore when shot, but those cloth fragments couldn't interact with game logic in real time; you couldn't, for example, do AI line-of-sight calculations against them, the AI would just look straight through them.
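To put some very rough numbers on why that readback round trip hurts, here's a minimal sketch. The per-fetch stall cost and the query count are made-up illustrative figures, not measurements; a synchronous readback usually also forces a pipeline flush, so the real cost depends heavily on the driver and how well you batch.

```cpp
// Back-of-the-envelope sketch: cost of synchronously fetching accelerator-side
// physics state for game logic. All figures below are assumptions for illustration.
#include <cstdio>

int main() {
    const double frame_budget_ms = 1000.0 / 60.0; // ~16.7 ms per frame at 60 fps
    const double fetch_stall_ms  = 0.5;           // assumed stall per synchronous readback
    const int    queries         = 16;            // hypothetical per-frame LoS / hit queries
                                                  // that need the GPU-side cloth/debris state

    const double stall_total_ms = queries * fetch_stall_ms;
    std::printf("%d sync readbacks * %.1f ms = %.1f ms of a %.1f ms frame (%.0f%%)\n",
                queries, fetch_stall_ms, stall_total_ms, frame_budget_ms,
                100.0 * stall_total_ms / frame_budget_ms);
    // Which is why, in practice, that state either stays GPU-side and purely cosmetic,
    // or game logic reads back a frame-old copy asynchronously.
    return 0;
}
```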
Besides, games are still starving for more CPU cores. Even with DX12 and better CPU utilization, modern games can push an 8-core processor to its limits, especially if what you demand is a high frame rate. And I suspect that CPUs, being so general purpose, would be fairly slow at RT calculations anyway.
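Napkin math on that suspicion; the per-core ray throughput below is an assumed figure for illustration, not a benchmark.

```cpp
// Napkin math: how many rays a game would need vs. what CPU cores might deliver.
// The per-core throughput figure is an assumption for illustration, not a measurement.
#include <cstdio>

int main() {
    const double width = 1920, height = 1080, fps = 60;
    const double rays_per_pixel = 2;            // e.g. one primary + one shadow ray
    const double rays_needed = width * height * fps * rays_per_pixel; // ~249 M rays/s

    const double assumed_rays_per_core = 10e6;  // assumed ~10 M rays/s per CPU core
    std::printf("need ~%.0f Mrays/s, i.e. ~%.0f cores at an assumed 10 Mrays/s per core\n",
                rays_needed / 1e6, rays_needed / assumed_rays_per_core);
    return 0;
}
```

And that's before any secondary bounces, which is where ray tracing actually gets expensive.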
A secondary card dedicated to RTX would make more sense. Buyers who want it can purchase one, and the main GPU doesn't need silicon wasted on it. It's pretty, but it's getting turned off for performance on my system. If DICE were to implement multi-GPU in DX12 I'd give it another go. No way am I dropping resolution for it.
The only reason RTX is possible at all is that there's hardware dedicated to RT ops, and that's only a small portion of the chip, maybe a third or so. This idea might work in a world where they made a second chip with most of its transistors devoted to RT ops, and you combined two different cards, a regular rendering card plus an RT card, but not in any regular SLI, two-identical-cards kind of capacity. Funny, really, because if they did that, graphics would have come full circle: in the good old days people had 2D video cards for rendering the desktop, 3D accelerators were secondary cards, and a cute little 5" VGA cable out the back connected the output of one to the input of the other. It'd be funny to see that come back.
Over-exaggeration. If it takes you 250 ms to respond to a visual stimulus, 8 ms (60 Hz to 120 Hz) isn't going to make much of a difference in the grand scheme of things.
What I'm saying is that that's not an accurate representation of what is happening. It's too simplified to say you see someone and make one discrete aiming movement that lands X ms later. As I said, it's a constant loop: perception of what's on the screen, your own mental processing of that perception, movement from your hands, input back into the game space, and then an update on the screen, completing the loop. That doesn't happen a single time when you aim; it happens continuously at a rapid rate, so within any 250 ms window the increased latency of that loop is a detriment.
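Quick arithmetic on that loop, only counting the display-side latency (everything else in the chain is the same at either refresh rate):

```cpp
// How many perceive-react-update iterations fit in a 250 ms tracking window,
// and how much feedback latency each one carries from the display alone.
#include <cstdio>

int main() {
    const double window_ms = 250.0;               // the reaction-time window from the post above
    const double rates[] = {60.0, 120.0, 144.0};  // refresh rates to compare

    for (double hz : rates) {
        double frame_ms = 1000.0 / hz;            // display-side latency per loop iteration
        std::printf("%3.0f Hz: %.1f ms per update, ~%.0f screen updates in %.0f ms\n",
                    hz, frame_ms, window_ms / frame_ms, window_ms);
    }
    //  60 Hz: 16.7 ms per update, ~15 updates in 250 ms
    // 120 Hz:  8.3 ms per update, ~30 updates in 250 ms
    // The ~8 ms isn't paid once against the 250 ms; it's paid on every iteration of the loop.
    return 0;
}
```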
It's less important for the kind of CS:GO AWPers who just make fast predictive snap aims; it's way more important for people with slower, more deliberate aim.