Ray Tracing for everybody!


Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
Since we are apparently only talking about a hybrid of rasterization and ray tracing, with only a small amount of ray tracing actually going on, perhaps frame rates will be faster than expected when everything gets going properly?

It's possible I guess. If it were easy to get the RT eye candy while keeping respectable frame rates, I'd imagine they would have dropped the game updates at launch. The longer it takes, the harder that careful balance looks to implement, as far as I can see at least.
 

Triloby

Senior member
Mar 18, 2016
587
275
136
720p/30 will be for the 2080 and below. The 2080ti will be full 1080p. Dunno if above 30fps tho.

Almost seems kinda pointless to spend that much on an RTX card if you have a high-refresh-rate 1440p monitor, or any 4K display in general, only to drop back to 720p or 1080p just to get semi-raytraced effects at an acceptable frame rate...

Then again, this is all early days for implementing ray tracing in video games.
 
  • Like
Reactions: ZGR

gorobei

Diamond Member
Jan 7, 2007
3,668
997
136
adoredtv did a fairly deep dive into raytracing in general: https://www.youtube.com/watch?v=SrF4k6wJ-do [general state of the art starts around the 12min mark.]
simply put, the number of samples needed for a full path traced render requires a new gpu architecture that abandons raster optimization. hybrid rt+raster is too expensive for a half measure that is still holding on to legacy performance numbers.
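to put rough numbers on that (entirely my own back-of-envelope assumptions, not figures from the video):

```cpp
// back-of-envelope ray budget for a "clean" full path trace at 1080p.
// every number below is an illustrative assumption, not a measured figure.
#include <cstdio>

int main() {
    const double pixels        = 1920.0 * 1080.0; // ~2.07 million pixels
    const double samplesPerPix = 1000.0;          // offline-quality spp (assumption)
    const double raysPerSample = 4.0;             // a few bounces + shadow rays (assumption)
    const double raysPerFrame  = pixels * samplesPerPix * raysPerSample; // ~8.3e9 rays

    const double raysPerSecond = 10e9;            // the marketed "10 Gigarays/s" figure

    std::printf("rays per frame   : %.2e\n", raysPerFrame);
    std::printf("seconds per frame: %.2f\n", raysPerFrame / raysPerSecond);
    // ~0.8 s per frame even in this ideal case, i.e. ~1 fps.
    // hence hybrid rt for a handful of effects plus heavy denoising.
    return 0;
}
```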

anyone expecting rt to revolutionize/radically simplify game lighting is not familiar with the lighting pipeline. we had luminosity based lighting in maya/lightwave/soft/etc for feature films almost 15 years ago. lighting td's didn't go away, because movie/tv level lighting is a plethora of key+fill+rim+dynamic lights (often in the dozens of lights per character range) along with assorted minor lights. the falloff effect of real life lighting means you are always going to have to sort and individually exclude lights per character/object for best efficiency.
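to show what i mean by sorting/excluding lights per object, a toy sketch (purely illustrative c++ of my own, not from any real engine; the names and the bare inverse-square falloff are made up for the example):

```cpp
// pick the handful of lights that actually matter for one character/object;
// everything else gets excluded, which is the per-object bookkeeping a
// lighting td still ends up doing even with ray tracing in the mix.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Light { float x, y, z; float intensity; };

// rough contribution of a light at an object, using inverse-square falloff
float contribution(const Light& l, float ox, float oy, float oz) {
    float dx = l.x - ox, dy = l.y - oy, dz = l.z - oz;
    float d2 = dx * dx + dy * dy + dz * dz;
    return l.intensity / std::max(d2, 1e-4f);
}

// keep only the maxLights strongest lights for this object
std::vector<Light> selectLights(std::vector<Light> lights,
                                float ox, float oy, float oz,
                                std::size_t maxLights) {
    std::sort(lights.begin(), lights.end(),
              [&](const Light& a, const Light& b) {
                  return contribution(a, ox, oy, oz) > contribution(b, ox, oy, oz);
              });
    if (lights.size() > maxLights) lights.resize(maxLights);
    return lights;
}
```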
there was a siggraph presentation back in 2007 on TF2's custom shader for the softer, pixar-like lighting aesthetic. you can't get that with a pure reality-based rt setup. games will always have some sort of non-motivated lighting cheat to draw your attention in a specific direction.

whatever brand of the 1st gen dxr hardware you get is never going to be able to do anything really impressive at the framerates everyone is used to.
 
  • Like
Reactions: dlerious

Thala

Golden Member
Nov 12, 2014
1,355
653
136
adoredtv did a fairly deep dive into raytracing in general: https://www.youtube.com/watch?v=SrF4k6wJ-do [general state of the art starts around the 12min mark.]

whatever brand of the 1st gen dxr hardware you get is never going to be able to do anything really impressive at the framerates everyone is used to.

Having accurate reflections and refractions along with proper shadows is already very impressive in my book compared to all the fake techniques used today like screen-space reflections or cube-maps.
 
  • Like
Reactions: ryan20fun

gorobei

Diamond Member
Jan 7, 2007
3,668
997
136
Having accurate reflections and refractions along with proper shadows is already very impressive in my book compared to all the fake techniques used today like screen-space reflections or cube-maps.
reflections/refractions at ~20 fps might be impressive, but they're unplayable. note my point about framerates.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
reflections/refractions at ~20 fps might be impressive, but they're unplayable. note my point about framerates.
It's hard to believe NV's hybrid method will be unplayable, given the presentation and the buildup.
If it only works at unplayable frame rates, then NV is in big trouble, imo.
 

PrincessFrosty

Platinum Member
Feb 13, 2008
2,301
68
91
www.frostyhacks.blogspot.com
Yes, I mean, what makes a piece of hardware RT hardware???

This is in reference to specific parts of the GPU hardware being designed to specialize in ray tracing operations. With hardware like CPUs and GPUs you can either develop what is known as GP (general purpose) hardware, which can do any kind of calculation but is slow at most of them, because all the layers of abstraction needed to map any generic calculation onto the general purpose hardware make render times for ray tracing absolutely diabolical. Or you can build hardware which is optimized for doing ray tracing work only; it's faster because you need less abstraction, but that hardware typically isn't good for doing much else.

I think the point being made here is that you'll be able to run ray tracing on GP hardware with DX12 support, but it will likely be so slow it won't be playable unless you have a GPU which dedicates some specialist hardware to ray tracing, such as the new Nvidia RTX cards. Even when you dedicate large amounts of die space to RT it's still quite slow; hybrid rendering struggles to hit 1080p at 60fps.
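For what it's worth, this is roughly how an application asks D3D12 whether that specialist hardware path exists at all; a minimal sketch using the public capability query (Windows 10 1809+ SDK assumed, error handling trimmed):

```cpp
// Query whether the GPU/driver expose hardware-accelerated DXR, or whether
// ray tracing would have to run on the general purpose shader hardware.
// Build with the Windows 10 SDK and link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available at all.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    const bool nativeDxr =
        SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))) &&
        opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    std::printf(nativeDxr ? "Native (hardware/driver accelerated) DXR.\n"
                          : "No native DXR; only a compute-based fallback.\n");
    return 0;
}
```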
 
  • Like
Reactions: Headfoot

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
It's hard to believe NV's hybrid method will be unplayable, given the presentation and the buildup.
If it only works at unplayable frame rates, then NV is in big trouble, imo.

No one knows how it will perform yet, so all the "points" about frame rates have pretty much nothing to back them up, and I suspect they come more from a wish to play down Nvidia than from any actual unbiased evidence.

The argument for ray tracing working is that Nvidia aren't stupid enough to spend that many transistors, and to charge that much per card, if the card's headline feature is useless.
 
Last edited:

mv2devnull

Golden Member
Apr 13, 2010
1,498
144
106
The argument for ray tracing working is that Nvidia aren't stupid enough to spend that many transistors, and to charge that much per card, if the card's headline feature is useless.
Grab the money and run. In the old times one could sell snake oil at a premium price and move on to the next town before anyone noticed. NVidia cannot hide like that, can it?
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
No one knows how it will perform yet, so all the "points" about frame rates have pretty much nothing to back them up, and I suspect they come more from a wish to play down Nvidia than from any actual unbiased evidence.

The argument for ray tracing working is that Nvidia aren't stupid enough to spend that many transistors, and to charge that much per card, if the card's headline feature is useless.

Nvidia wasted transistors on being able to process massive, but visually imperceptible, levels of tessellation. All for benchmarks. As long as the benchmarks show it's better and the tech press has little interest in analyzing whether it's necessary, Nvidia will sell cards. Reality is malleable. It's why marketing is a thing.

I've said before this chip is not primarily for gaming. This is a chip meant to sell Quadro cards for devs to do offline ray tracing and have access to tensor cores. Everything being presented for the gaming use case has been more of a force fit. Does it work? Yes. Is it optimal? No. Realtime raytracing is not even remotely as performant as current raster techniques. It's not as clean a case as the professional workload case, which delivers what people need/want and is completely better than the last generation. That's how we know what they built the chip for.
 
  • Like
Reactions: Feld

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
Nvidia wasted transistors on being able to process massive, but visually imperceptible, levels of tessellation.
There was no dedicated hardware for tessellation. It's just that their shader core design is extremely efficient at processing geometry. Just like GCN shader cores are very good at compute tasks.

Their RT cores, however, as we know are very specific to accelerating raytracing and are so far considered a black box. It's unknown AFAIK whether programmers will be able to use them for anything other than their specific intended task.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Grab the money and run. In the old times one could sell snake oil at a premium price and move on to the next town before anyone noticed. NVidia cannot hide like that, can it?
Obviously not, they want to keep selling to the same people so can't sell snake oil. It's also something you'd only do if you were desperate and needed cash fast, which Nvidia clearly aren't.

Nvidia wasted transistors on being able to process massive, but visually imperceptible, levels of tessellation. All for benchmarks. As long as the benchmarks show it's better and the tech press has little interest in analyzing whether it's necessary, Nvidia will sell cards. Reality is malleable. It's why marketing is a thing.

I've said before this chip is not primarily for gaming. This is a chip meant to sell Quadro cards for devs to do offline ray tracing and have access to tensor cores. Everything being presented for the gaming use case has been more of a force fit. Does it work? Yes. Is it optimal? No. Realtime raytracing is not even remotely as performant as current raster techniques. It's not as clean a case as the professional workload case, which delivers what people need/want and is completely better than the last generation. That's how we know what they built the chip for.
As you've already said, they already rasterize to the extreme with their imperceptible levels of tessellation, so they need something new to sell. People want better lighting more than they want even more tessellation, and ray tracing has always been the holy grail of lighting. Outside of making movies, it's gamers that want the most realistic visuals. I suspect a lot of Quadro users are engineering customers who work with bubble gum colors and no textures. Hence I think ray tracing is very much for gamers - it's Nvidia's way of getting us to buy new cards, and they obviously believe we will think it's amazing or they wouldn't be charging such silly money for those cards.
 
Last edited:

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
There was no dedicated hardware for tessellation. It's just that their shader core design is extremely efficient at processing geometry. Just like GCN shader cores are very good at compute tasks.

Their RT cores, however, as we know are very specific to accelerating raytracing and are so far considered a black box. It's unknown AFAIK whether programmers will be able to use them for anything other than their specific intended task.

I don't think that is completely right. Tessellators are dedicated hardware units.
Tessellation is a GPU-bound item. Modern architectures – Fermi, Kepler, and Maxwell included – include dedicated tessellation units that allow for independent processing of tessellated objects.
https://www.gamersnexus.net/guides/1936-what-is-tessellation-game-graphics


As you've already said, they already rasterize to the extreme with their imperceptible levels of tessellation, so they need something new to sell. People want better lighting more than they want even more tessellation, and ray tracing has always been the holy grail of lighting. Outside of making movies, it's gamers that want the most realistic visuals. I suspect a lot of Quadro users are engineering customers who work with bubble gum colors and no textures. Hence I think ray tracing is very much for gamers - it's Nvidia's way of getting us to buy new cards, and they obviously believe we will think it's amazing or they wouldn't be charging such silly money for those cards.

I don't think you realize how much ray tracing is done outside of gaming and movies. Everything from commercials to product previews to marketing materials uses ray tracing to mock up products, oftentimes within fully ray-traced scenes. Nvidia is also making a huge push into that movie space you mentioned. Right now it's all massive server farms of CPUs. The challenge is that the tools for CPU raytracing are very mature. The systems are very tuned. No one wants to throw that investment away and retrain people on different tools. Look no further than AMD's battle against CUDA to see how easy that is. Nvidia needs to show that the tools are there and the performance is better. It looks to me like they are starting by training the professionals on the Nvidia workflow.
 
  • Like
Reactions: prtskg and Malogeek

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
I don't think you realize how much ray tracing is done outside of gaming and movies. Everything from commercials to product previews to marketing materials uses ray tracing to mock up products, oftentimes within fully ray-traced scenes. Nvidia is also making a huge push into that movie space you mentioned. Right now it's all massive server farms of CPUs. The challenge is that the tools for CPU raytracing are very mature. The systems are very tuned. No one wants to throw that investment away and retrain people on different tools. Look no further than AMD's battle against CUDA to see how easy that is. Nvidia needs to show that the tools are there and the performance is better. It looks to me like they are starting by training the professionals on the Nvidia workflow.
I do know a fair number of Quadro users and tbh they don't need fancy ray tracing. I do agree there is a big market for movies and commercials. I suspect for that Nvidia just needs to put the hardware out there with APIs to control it and lots of free support, then the software will be updated to use it. Same as CUDA slowly took over. I agree that Nvidia doing it now means they will almost certainly corner the market with proprietary libraries before anyone else has a chance to do anything. However, that still won't need to be real time, just quicker.

This release is all about real time ray tracing. Those tensor cores aren't needed for movie rendering; they are there to de-noise, which is only required if you don't fire out enough rays to get a near-perfect result, which is what you'd do for a movie. In addition, a render farm wouldn't need all that tessellation power, so for them all that die space is wasted. No, the ray tracing + tensor core combo is definitely for real time ray tracing - where fps is key and compromised quality is acceptable, exactly what you want for games.
 
Last edited:

SirDinadan

Member
Jul 11, 2016
108
64
71
boostclock.com
Those tensor cores aren't needed for movie rendering; they are there to de-noise, which is only required if you don't fire out enough rays to get a near-perfect result, which is what you'd do for a movie.
Denoising is one of the most researched topics in rendering right now, and even full-blown, high-quality production renderers will deploy denoising. Everyone loves faster render times, so why not use it?
KPCN / DeepZ Denoising / MLDenoising
 

sandorski

No Lifer
Oct 10, 1999
70,101
5,640
126
30fps @1080p at these prices does not make RT for everybody. For the price of a 2080ti, one can build a full system that does 60+fps @1080p with no RT. Give it a couple of refreshes/generations. This assumes gaming as the primary usage.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
30fps @1080p at these prices does not make RT for everybody. For the price of a 2080ti, one can build a full system that does 60+fps @1080p with no RT. Give it a couple of refreshes/generations. This assumes gaming as the primary usage.
Well, we still don't know what the frame rates are likely to be if/when NV's hybrid raster/RT rendering is fully supported in games.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Well, we still don't know what the frame rates are likely to be if/when NV's hybrid raster/RT rendering is fully supported in games.

First of all, if I'm not mistaken DXR is hybrid raster/RT and can work with any DX-12 hardware from AMD, Intel and NVIDIA.
Secondly, from what DICE was saying about their BF 5 demo, they are targeting 60fps at 1080p with the RTX 2080Ti. That means they will use less RT (fewer reflections) in the game to reach the 60fps target.
 

zlatan

Senior member
Mar 15, 2011
580
291
136
First of all, if I'm not mistaken DXR is hybrid raster/RT and can work with any DX-12 hardware from AMD, Intel and NVIDIA.

Not every. The fallback layer needs shader model 6 or better, and there is DX12 hardware on the market that doesn't support this, like Fermi.

If the driver supports DXIL, then DXR just works on the fallback layer. No additional support is required, other than the DXR shaders, but that's just a little upgrade for the compiler.
Native support requires a "real" driver. NVIDIA will provide it for Volta and Turing, while AMD will talk about it later.

There is a lot of hardware that might get native support later. In theory there are pipeline stages that can be accelerated on Pascal and Maxwell, or on all of the GCN Radeons. So if an IHV doesn't support these, it will be a business decision, not a technical one.
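To make the shader model 6 requirement concrete, here is a minimal sketch of the standard D3D12 feature query an application can run; whether the fallback layer then actually works still comes down to the driver's DXIL support, as described above:

```cpp
// Ask the driver whether it reports shader model 6.0 or better, the DXIL
// baseline the DXR fallback layer needs. Assumes an already created
// ID3D12Device*; link against d3d12.lib.
#include <d3d12.h>

bool SupportsShaderModel6(ID3D12Device* device) {
    // Pass in the highest model the app understands; the runtime lowers it
    // (or fails the call) if the driver can't go that high.
    D3D12_FEATURE_DATA_SHADER_MODEL sm = { D3D_SHADER_MODEL_6_0 };
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL,
                                           &sm, sizeof(sm)))) {
        return false;
    }
    return sm.HighestShaderModel >= D3D_SHADER_MODEL_6_0;
}
```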
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Not every. The fallback layer needs shader model 6 or better, and there is DX12 hardware on the market that doesn't support this, like Fermi.

If the driver supports DXIL, then DXR just works on the fallback layer. No additional support is required, other than the DXR shaders, but that's just a little upgrade for the compiler.
Native support requires a "real" driver. NVIDIA will provide it for Volta and Turing, while AMD will talk about it later.

There is a lot of hardware that might get native support later. In theory there are pipeline stages that can be accelerated on Pascal and Maxwell, or on all of the GCN Radeons. So if an IHV doesn't support these, it will be a business decision, not a technical one.

Yes, thank you, I wasn't considering Fermi as a DX-12 card, only Kepler onward.

And as you have said, Kepler and the first GCN cards may be compatible, but I don't believe either AMD or NVIDIA will want to support those with DXR as they will probably not be able to run RT games at reasonable fps even at 1080p.

On the other hand, there is a chance that VEGA 10 will perform quite well in DXR, since it has support for Axis Aligned Rectangular Primitives (DX-12.1), which as far as I understand are used for BVH in RT (if I'm wrong about this please correct me).
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Not every. The fallback layer needs shader model 6 or better, and there is DX12 hardware on the market that doesn't support this, like Fermi.

If the driver supports DXIL, then DXR just works on the fallback layer. No additional support is required, other than the DXR shaders, but that's just a little upgrade for the compiler.
Native support requires a "real" driver. NVIDIA will provide it for Volta and Turing, while AMD will talk about it later.

There is a lot of hardware that might get native support later. In theory there are pipeline stages that can be accelerated on Pascal and Maxwell, or on all of the GCN Radeons. So if an IHV doesn't support these, it will be a business decision, not a technical one.

A couple of DXR benches that should work on anything with proper DXR drivers, which so far means only NVidia cards.
https://www.reddit.com/r/nvidia/comments/9lcs4u/microsoft_dxr_demos_compiled_for_windows_10/
https://www.youtube.com/watch?time_continue=94&v=GycebOhBEds
 
Last edited:

Timmah!

Golden Member
Jul 24, 2010
1,418
630
136
Nvidia wasted transistors on being able to process massive, but visually imperceptible, levels of tessellation. All for benchmarks. As long as the benchmarks show it's better and the tech press has little interest in analyzing whether it's necessary, Nvidia will sell cards. Reality is malleable. It's why marketing is a thing.

I've said before this chip is not primarily for gaming. This is a chip meant to sell Quadro cards for devs to do offline ray tracing and have access to tensor cores. Everything being presented for the gaming use case has been more of a force fit. Does it work? Yes. Is it optimal? No. Realtime raytracing is not even remotely as performant as current raster techniques. It's not as clean a case as the professional workload case, which delivers what people need/want and is completely better than the last generation. That's how we know what they built the chip for.

I sort of agree with this. But I don't really see anything bad about this approach - granted, I am biased, 'cause I have use for it and have wanted this for years. Anyway, it's IMO similar to Intel selling rebadged Xeon CPUs as Skylake-X, and nobody really blasts them for doing so, even though consumers don't really need 18 cores or AVX-512 for gaming and whatnot.
At the same time, even though I agree it's being force fit to games, I am pretty sure it's going to work just fine; all these rumors of the 2080ti not being able to deliver even 60FPS at 1080p are premature, based on unoptimized demos from Gamescom. They will surely find a way to make it work properly. And if we talk about raytracing in games being not optimal - I don't think that's going to be down to performance issues. It will be more that this hybrid approach doesn't offer enough visible difference to most people when compared to a 100 percent rasterized image. For that you would need a 100 percent ray-traced picture, which is obviously a no-go in real time yet and will be for quite some time.



I do know a fair number of Quadro users and tbh they don't need fancy ray tracing. I do agree there is a big market for movies and commercials. I suspect for that Nvidia just needs to put the hardware out there with APIs to control it and lots of free support, then the software will be updated to use it. Same as CUDA slowly took over. I agree that Nvidia doing it now means they will almost certainly corner the market with proprietary libraries before anyone else has a chance to do anything. However, that still won't need to be real time, just quicker.

This release is all about real time ray tracing. Those tensor cores aren't needed for movie rendering; they are there to de-noise, which is only required if you don't fire out enough rays to get a near-perfect result, which is what you'd do for a movie. In addition, a render farm wouldn't need all that tessellation power, so for them all that die space is wasted. No, the ray tracing + tensor core combo is definitely for real time ray tracing - where fps is key and compromised quality is acceptable, exactly what you want for games.

They most likely use those Quadros just for engineering purposes - working with AutoCAD, Revit, and SolidWorks viewports, which really is faster on Quadros thanks to their optimized drivers - but not for actual final rendering. You don't even need a Quadro for that; a regular GeForce is as fast as a Quadro at that particular task. The only advantage a Quadro has is VRAM capacity, but you only need that for massive projects. BTW, you can certainly use the Tensor cores for off-line rendering too; it is not just real-time rendering that requires denoising.