
Ray Tracing for everybody!

Everybody is an exaggeration. DXR is one part of the puzzle. GPU makers still have to supply drivers that support it, and then you need a game/demo that actually uses it.

Then when all of that is in place, you can run your game/demo at 4fps if you don't have actual RT HW.
 

Even if a person were only to get 4 fps running the demos, at least they could see what they're potentially missing out on. I doubt it'll be enough to get end users to pony up for a card that's capable, though. It will be interesting to see the reviews showing the RT performance hit and how big the visual differences are.
 
Probably the optimization needed to be able to do it. That's why the new Nvidia chips have hardware made specifically to run RT.

What I'm trying to point out here is that everyone talks about ray tracing hardware specifically, but nobody really understands what this hardware actually is.

It's nothing more than the ability to execute FP16.
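For reference, FP16 is half-precision floating point: a 16-bit format with only a 10-bit fraction, so its precision runs out quickly. A quick sketch of that limit, using NumPy's `float16` purely as an illustration:

```python
# Illustration of FP16 (half precision) limits, using numpy's float16.
# float16 has an 11-bit significand, so integers above 2048 cannot all
# be represented exactly: 2048 + 1 rounds back to 2048.
import numpy as np

a = np.float16(2048)
b = np.float16(1)
print(a + b)            # still 2048.0: the +1 is lost to rounding
print(np.float16(0.1))  # stored as ~0.09998, not exactly 0.1
```

That throughput-over-precision tradeoff is what makes FP16 attractive for graphics and ML workloads, but whether RT hardware is "nothing more" than FP16 is exactly the question debated below.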

And something else about Microsoft and machine learning (WinML and DirectML); this is from SIGGRAPH 2018.

http://on-demand.gputechconf.com/si...-gpu-inferencing-directml-and-directx-12.html

 

So is "RT HW = FP16-enabled HW"? That would seem to be the answer to your question of what RT HW is, no?
 
What's this gonna be, 720p30 run on shaders if you don't have RT cores?

As far as we can tell (without more information), there is nothing really special about the tensor cores in RTX cards; it's just another name for compute-only cores. Although if nVidia gives out more information, this may turn out not to be the case.

But even if this does make use of those, even a 2080 Ti is most likely looking at sub-30 fps at lower resolutions. Even their demos, which were lacking in scene detail, ran at 30 fps at 1080p.
 
There's a lot of entirely dedicated hardware in there for RT, along with the tensor cores. Then there's the neural-network code needed to run the denoising on the tensor cores (definitely not trivial) and tying it all together coherently.

Always much easier to copy, but it definitely isn't a trivial thing.
 
Nobody but nVidia understands the specific details of how the hardware works, but it was pretty clear that it does the BVH search. It seems like a straightforward conclusion that they've reduced parts of that algorithm to fixed hardware so that it runs at the speed of fixed-function hardware. To say it's just regular FP16 resources is pretty clearly wrong, and FUD. If it were just that, then it would be available as FP16 resources, like the FP16 resources already on the chip...
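For anyone unfamiliar with what a BVH search actually does, here's a minimal sketch. The node layout and slab test are my own illustrative assumptions, not NVIDIA's (unpublished) implementation; the point is that whole subtrees whose bounding box the ray misses get skipped:

```python
# Minimal sketch of BVH (bounding volume hierarchy) traversal: cull whole
# subtrees whose axis-aligned box the ray misses, so only a fraction of
# triangles ever need an exact intersection test.

def ray_aabb_hit(origin, inv_dir, lo, hi):
    """Slab test: does the ray (origin, 1/direction) hit the box [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        t1 = (lo[i] - origin[i]) * inv_dir[i]
        t2 = (hi[i] - origin[i]) * inv_dir[i]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

class Node:
    def __init__(self, lo, hi, children=(), tris=()):
        self.lo, self.hi = lo, hi
        self.children, self.tris = children, tris

def traverse(node, origin, inv_dir, hits):
    if not ray_aabb_hit(origin, inv_dir, node.lo, node.hi):
        return                          # whole subtree culled in one test
    hits.extend(node.tris)
    for child in node.children:
        traverse(child, origin, inv_dir, hits)

# Two leaf boxes; the ray only reaches the first one.
left  = Node((1, -1, -1), (2, 1, 1), tris=["triA"])
right = Node((1, 5, 5), (2, 7, 7), tris=["triB"])
root  = Node((1, -1, -1), (2, 7, 7), children=(left, right))

direction = (1.0, 0.1, 0.1)
inv_dir = tuple(1.0 / d for d in direction)
hits = []
traverse(root, (0.0, 0.0, 0.0), inv_dir, hits)
print(hits)   # only "triA" survives the box tests
```

Baking that traversal loop into fixed-function silicon is plausibly what lets the RT cores run it at far higher rates than general-purpose shaders.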
 

Tensor cores are not just another name for compute cores. They are highly specialized machine-learning units. They have many uses, but here they will provide the denoising needed for quick ray tracing. You could possibly do without them, but they probably give a small overall boost to denoising speed and/or improve its quality.
https://cloud.google.com/blog/produ...k-at-googles-first-tensor-processing-unit-tpu
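To illustrate why the denoising matters with a toy model (all numbers here are made up for illustration): a real-time path tracer only gets a ray or two per pixel, so each pixel's estimate is noisy; averaging more samples, or filtering intelligently as a learned denoiser does, cuts that error:

```python
# Toy model of Monte Carlo noise: each ray returns the true radiance plus
# Gaussian noise. Error shrinks roughly as 1/sqrt(N) with N samples, which
# is why 1-2 rays/pixel looks speckled and needs a denoiser to be usable.
import random

def pixel_estimate(true_value, n_samples, rng):
    samples = [true_value + rng.gauss(0.0, 0.2) for _ in range(n_samples)]
    return sum(samples) / n_samples

def mean_abs_error(n_samples, pixels, rng):
    return sum(abs(pixel_estimate(0.5, n_samples, rng) - 0.5)
               for _ in range(pixels)) / pixels

rng = random.Random(0)
err_1  = mean_abs_error(1, 200, rng)    # ~1 ray per pixel: very noisy
err_64 = mean_abs_error(64, 200, rng)   # 64 rays per pixel: far cleaner
print(err_1 > err_64)
```

Brute-forcing 64 samples per pixel is exactly what real-time budgets can't afford, which is why reconstructing a clean image from 1-2 samples with an ML filter is the attractive alternative.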

OTOH, RT cores are totally dedicated to Ray Tracing. They only exist to calculate ray and triangle intersections. This is the biggest speedup, and the most critical part that enables RT to function in games. IMO without this dedicated RT element, it really isn't feasible to do RT in games.
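The ray/triangle intersection being accelerated is a small, well-known kernel; a software version (the standard Möller-Trumbore test, shown here only for illustration, not as the actual RT-core circuit) looks like this:

```python
# Moller-Trumbore ray/triangle intersection: the small fixed kernel that
# RT cores are said to accelerate. Returns distance t along the ray to the
# triangle (v0, v1, v2), or None on a miss.

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:          # outside the triangle (barycentric u)
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:      # outside the triangle (barycentric v)
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None

# Ray down +x toward a triangle in the plane x = 5: hits at distance ~5.
t = ray_triangle((0, 0, 0), (1, 0, 0), (5, -1, -1), (5, 2, -1), (5, -1, 2))
print(t)
```

Per ray, this is only a handful of dot and cross products, but a scene fires hundreds of millions of them per second, which is why dedicating silicon to it pays off.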
 

Without knowing how the 20xx series performs with RT on the lower-tier cards, it might be safe to say RT isn't even feasible anyway. If the 20xx cards in the mortal man's price range don't have RT, or the grunt to run it, then the adoption rate will be even lower. Playing at 720p with RT on doesn't sound like what PC gamers are striving for.
 
Since we are apparently only talking about a hybrid of rasterization and ray tracing, with only a small amount of ray tracing actually going on, perhaps frame rates will be faster than expected once everything gets going properly?
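Some back-of-envelope arithmetic supports keeping the traced portion small (all figures below are illustrative assumptions, not measured specs):

```python
# Rough ray-budget arithmetic for a hybrid renderer (assumed numbers).
width, height = 1920, 1080   # 1080p
fps = 60
rays_per_pixel = 2           # small hybrid budget, e.g. shadows + reflections

rays_per_second = width * height * rays_per_pixel * fps
print(f"{rays_per_second / 1e9:.2f} gigarays/s")   # ~0.25 gigarays/s
```

Even this modest budget is about a quarter of a gigaray per second before counting any secondary bounces, so a hybrid pipeline that spends most of the frame on rasterization and only traces a few effects per pixel does seem like the realistic way to hit playable frame rates.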
 