News Crytek CryEngine shows raytracing technology demo (runs on Radeon RX Vega 56)

Hitman928

Diamond Member
Apr 15, 2012
6,642
12,245
136
https://www.guru3d.com/news-story/v...hnology-demo-(runs-on-radeon-rx-vega-56).html

"Neon Noir was developed on a bespoke version of CRYENGINE 5.5., and the experimental ray tracing feature based on CRYENGINE’s Total Illumination used to create the demo is both API and hardware agnostic, enabling ray tracing to run on most mainstream, contemporary AMD and NVIDIA GPUs. However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12."
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
They will be using RT cores through DX12/DXR in the next version of the tech, so expect RTX cards to have a serious edge over non-RTX ones.
 
  • Like
Reactions: maddogmcgee
Mar 11, 2004
23,444
5,849
146
They will be using RT cores through DX12/DXR in the next version of the tech, so expect RTX cards to have a serious edge over non-RTX ones.

We'll see on both counts. This seems to be deliberately positioned as an alternative to DXR (I wouldn't take their mention of future DX12/Vulkan support as implying DXR).
 
  • Like
Reactions: maddogmcgee

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
They will be using RT cores through DX12/DXR in the next version of the tech, so expect RTX cards to have a serious edge over non-RTX ones.

There is zero proof that RT cores are any faster than compute cores. nVidia won't even say what an RT core is. For all we know, it's a fancy name for compute cores that can only be used for RT. And Crytek has zero reason to go through and change their tech from being open to proprietary.
 
  • Like
Reactions: psolord

Muhammed

Senior member
Jul 8, 2009
453
199
116
There is zero proof that RT cores are any faster than compute cores. nVidia won't even say what an RT core is. For all we know, it's a fancy name for compute cores that can only be used for RT. And Crytek has zero reason to go through and change their tech from being open to proprietary.
Compare the RTX 2080 Ti's results to the Titan V's in the Port Royal benchmark: the 2080 Ti is 3 times faster than the Titan V. A 2060 is faster than the Titan V there. That's your ultimate proof. Fixed-function units are always faster than generic cores.
 

maddie

Diamond Member
Jul 18, 2010
5,147
5,523
136
Even if this were so, they're still extra units that will run the ray tracing WITHOUT impacting normal gaming/FPS.
Very shallow analysis.

Just as Nvidia has prevented the option of DLSS at lower resolutions because the specialized hardware is too slow at high framerates, it is possible that the RT cores will be the limiting factor in completing a frame. If the other hardware has to wait for the RT computations to finish before work on the next frame begins, that very much will impact framerate.

The idea that specialized hardware always speeds up work is too simplistic. A proper analysis takes the entire workflow into account and checks for bottlenecks. I have maintained that it's possible general-purpose hardware has the advantage, since you can fit more of it instead of sacrificing die space for specialized hardware. Extra units, as you so easily conjure them, don't come for free; they always come at the cost of something else in their place. Details of the actual work are critical.

You have in the past invoked Amdahl's law to argue that IPC is the most valuable property in CPUs. Think of this as a distant cousin of said law.
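To put rough numbers on it (everything below is invented, purely to illustrate the shape of the argument), a minimal sketch of the fixed stage acting as the bottleneck:

```python
# Toy model, all numbers invented: if the stages of a frame overlap,
# the slowest stage sets the frame time. Speeding up the general-purpose
# shader work stops helping once the fixed RT stage dominates.
def frame_time_ms(shader_ms, rt_ms):
    return max(shader_ms, rt_ms)  # assumes perfect overlap between stages

rt_ms = 12.0  # hypothetical fixed RT-core time per frame
for shader_ms in (16.0, 12.0, 8.0, 4.0):  # ever-faster shader cores
    t = frame_time_ms(shader_ms, rt_ms)
    print(f"shader {shader_ms:4.1f} ms, RT {rt_ms:.1f} ms -> {1000 / t:.0f} fps")
```

Once the RT stage dominates, the extra shader speed buys you nothing. That is the Amdahl-flavoured point.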
 

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,576
96
Some people still don't realize that RT can run on just about any DX12 hardware. It's just that anything with RT cores will run tremendously faster. The source video appears to be 4K at 30 fps on that Vega 56.

I would kill for a downloadable playable demo because that looks absolutely amazing. Shame the Crysis series appears to be over, because another one with RT on their newest engine would be mind-blowing, but probably completely unplayable on anything outside of an RTX 2080 Ti as well :p
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
Well, there is a problem: the RT core bottleneck. I've heard that you need high GPU utilization to remove the bottleneck, so for RTX they have to find a way around it.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
Very shallow analysis.

Just as Nvidia has prevented the option of DLSS at lower resolutions because the specialized hardware is too slow at high framerates, it is possible that the RT cores will be the limiting factor in completing a frame. If the other hardware has to wait for the RT computations to finish before work on the next frame begins, that very much will impact framerate.

The idea that specialized hardware always speeds up work is too simplistic. A proper analysis takes the entire workflow into account and checks for bottlenecks. I have maintained that it's possible general-purpose hardware has the advantage, since you can fit more of it instead of sacrificing die space for specialized hardware. Extra units, as you so easily conjure them, don't come for free; they always come at the cost of something else in their place. Details of the actual work are critical.

You have in the past invoked Amdahl's law to argue that IPC is the most valuable property in CPUs. Think of this as a distant cousin of said law.
Huh? You are talking about something completely different here.
Sure, devs could pull the same bull they did with tessellation and make RTX run slower than molasses, but even then, having more units will still be better than having fewer.
Maybe I should have said you would be able to run an amount of ray tracing optimized for your card without losing FPS.
Nobody said that they come for free or anything else.
 
  • Like
Reactions: Muhammed

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
I’ve read that this version of CryEngine uses cone tracing. DXR doesn’t support cone tracing AFAIK. If Crytek is using BVHs, then RTX's hardware/driver accelerators for BVH traversal etc. would be helpful; otherwise the RT cores are useless.
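For contrast, here is the rough shape of cone tracing, as a sketch only (sample_voxel_mip is an invented stand-in for the engine's prefiltered voxel volume, not anything from CRYENGINE's actual API): instead of exact ray/triangle hits against a BVH, it marches a widening footprint through mip-mapped voxels, which is why BVH-traversal hardware would have nothing obvious to accelerate here.

```python
import math
import numpy as np

VOXEL_SIZE = 0.1  # assumed voxel edge length, in world units

def cone_trace(origin, direction, aperture, max_dist, sample_voxel_mip):
    # March front to back; the sample footprint (and thus the mip level)
    # grows with distance, standing in for a whole bundle of rays.
    color, occlusion, dist = 0.0, 0.0, VOXEL_SIZE
    while dist < max_dist and occlusion < 0.99:
        radius = max(aperture * dist, VOXEL_SIZE)
        mip = math.log2(radius / VOXEL_SIZE)
        c, a = sample_voxel_mip(origin + direction * dist, mip)
        color += (1.0 - occlusion) * a * c  # front-to-back compositing
        occlusion += (1.0 - occlusion) * a
        dist += radius  # step size scales with the footprint
    return color, occlusion

# Dummy volume (uniform faint fog) just to make the sketch executable.
fog = lambda pos, mip: (1.0, 0.05)
print(cone_trace(np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.3, 5.0, fog))
```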
 

maddie

Diamond Member
Jul 18, 2010
5,147
5,523
136
Huh? You are talking about something completely different here.
Sure, devs could pull the same bull they did with tessellation and make RTX run slower than molasses, but even then, having more units will still be better than having fewer.
Maybe I should have said you would be able to run an amount of ray tracing optimized for your card without losing FPS.
Nobody said that they come for free or anything else.
Can you give me a breakdown of an RT core and the actual computation used? I think this is needed before we use promotional information to make judgements. Until that time, stating for a fact that general compute pipelines will be slower is faith masquerading as fact. One thing to keep in mind: you will have a lot more general-purpose pipelines/cores than specialized RT units. The benefit is that when not doing RT ops, they can be used for other computations.

This demo by Crytek appears to show that the main visual benefits of RTX are achievable without the specialized RT cores of the RTX series, and on what is today a pretty mid-range GPU. I see no reason some additional math operations couldn't be added to the present mainstream graphics ISAs, giving additional gains. Next-gen RX 470-class cards will probably be within this performance range, and the 1660 Ti is close as well.
 

sandorski

No Lifer
Oct 10, 1999
70,677
6,250
126
This demo, along with recent rumours about future consoles having ray tracing, is very interesting. It seems like this feature will be standard fare within a few years.
 

jpiniero

Lifer
Oct 1, 2010
16,490
6,983
136
So when do you think AMD will enable the DXR fallback in their driver so we can see what the perf is like in Port Royal?
 

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
So when do you think AMD will enable the DXR fallback in their driver so we can see what the perf is like in Port Royal?
Some random dude on the internet thinks it’s enabled, but I couldn’t find anything solid out there. AMD wouldn’t want anyone releasing benchmarks on the fallback since performance won’t be anywhere near optimized.

https://www.pc-better.com/dxr-on-radeon-vii/
 
  • Like
Reactions: maddogmcgee

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Tech demos are shown all the time as proof of concept, but often do not carry the same performance once an actual game is built on top of them. The new CryEngine looks neat, but it's just a tech demo. No AI, no NPCs, no explosions, no synchronized sound, no internet data packets to parse and input, no other special effects happening. Throw all of that into the mix and performance is going to drop. How much? That remains to be seen.

No one has said RT can't be done on legacy hardware. All Nvidia did was create fixed-function cores to speed it up and, by doing so, brought the conversation about RT to the table a few years ahead of when it was thought to happen.
 
  • Like
Reactions: Muhammed and DooKey
Mar 11, 2004
23,444
5,849
146
Tech demos are shown all the time as proof of concept, but often do not carry the same performance once an actual game is built on top of them. The new CryEngine looks neat, but it's just a tech demo. No AI, no NPCs, no explosions, no synchronized sound, no internet data packets to parse and input, no other special effects happening. Throw all of that into the mix and performance is going to drop. How much? That remains to be seen.

No one has said RT can't be done on legacy hardware. All Nvidia did was create fixed-function cores to speed it up and, by doing so, brought the conversation about RT to the table a few years ahead of when it was thought to happen.

Since we still don't know what RT cores even are, we can't say they're "fixed function" (I still have a hunch they were actually implemented for some HPC customers and Nvidia is touting them for ray tracing, like it did with Tensor cores, pitching them for other uses when they were actually implemented for its biggest customers), and because of that I don't know that we can even say they're accelerating anything. Furthermore, they don't even seem to be that great: they're doing a limited form of ray tracing and yet the performance is still so poor that Nvidia is having to scale it back further (going back to simpler raster tricks that are seemingly hard to tell apart from the real thing) and resort to other methods (rendering at lower resolution and upscaling) to try to make it feasible. I have a hunch that "general purpose compute" units will get us there just as quickly, and that we're effectively just waiting for the proper math features to be enabled. That is what Tensor units are and how they're being implemented: they sit in the GPU pipeline on Nvidia, AMD is implementing the same features that way as well rather than as separate fixed-function units, and Nvidia's own block diagram of Turing shows the RT cores in there too. That doesn't mean they couldn't be specialized or expanded, but it does show they're tied to the design of the rest of the GPU in some way, and I wouldn't think Nvidia would want hardware usable only for some small aspect of ray tracing to potentially constrain the rest of the GPU if that's all it could be used for.

I don't know if they even sped anything up. Microsoft seems to have been the one actually pushing ray tracing, possibly to get companies to start adopting DX12, since few really had and most seem not to have put much work into making good use of it. And other companies have been working on stuff like this for a long time. I think CPU core counts and GPU compute capability have simply reached the point where it's feasible to start implementing it, assuming both continue to grow. Honestly, I'm concerned about how much further that can scale without massive chips, since transistor improvements are slowing down and we're getting close to needing major overhauls (which will be expensive, and most products likely won't be able to absorb those costs, so economies of scale will be harder to reap). So I don't know that we'll see ray tracing properly take off outside of the HPC space (i.e., game-streaming services), as I think there will be too many issues getting hardware capable of proper real-time ray tracing into consumers' hands.
 

DXDiag

Member
Nov 12, 2017
165
121
116
we can't say they're "fixed function" (I still have a hunch they were actually implemented for some HPC customers
They are fixed function in the sense that they accelerate a specific type of math (BVH intersection), same as Texture Mapping Units, Raster Units, and ROPs. So they are fixed function by definition.
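For the curious, this is roughly the kind of math in question, sketched in plain Python (which is of course not how the hardware implements it): the ray vs. axis-aligned bounding box "slab test" evaluated at every node of a BVH traversal.

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    # Slab test: clip the ray's parametric interval against the three
    # axis-aligned slabs; the box is hit iff the interval stays
    # non-empty. inv_dir is 1/direction, precomputed once per ray.
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# A ray along +x from the origin vs. a unit box two units away.
# (1e30 stands in for 1/0 on the axes the ray doesn't move along.)
print(ray_hits_aabb((0, 0, 0), (1.0, 1e30, 1e30),
                    (2, -0.5, -0.5), (3, 0.5, 0.5)))  # True
```

Nvidia's Turing whitepaper describes the RT cores as handling exactly this BVH box/triangle intersection work in hardware; how they implement it internally is the undisclosed part.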

I have a hunch that "general purpose compute" units will get us there just as quickly
That's incorrect. A Titan V sits below an RTX 2060 in the Port Royal benchmark, and a Titan V is no slouch when it comes to compute and rasterization (it's slightly below 2080 Ti performance).

they're doing a limited form of ray tracing and yet the performance is still so poor
Ray tracing in itself is the heaviest math out there; the fact that these chips are running it at decent fps (1440p60) at all is a great achievement, and even Raja and Intel think the same way. No current general-purpose hardware is enough to do that, because the general-purpose hardware will be busy doing raster work plus tracing. RT cores take some of that burden off the general-purpose hardware. There is no way around this fact: a combination of fixed function + general purpose is always going to be faster than general purpose alone.
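To put toy numbers on it (all invented, purely to illustrate the offload argument):

```python
# Toy model, numbers invented: per-frame work measured in milliseconds
# of shader-core time.
raster_ms = 10.0  # shading/raster work, shader cores only
rt_ms = 8.0       # BVH traversal/intersection work

# Everything on general-purpose cores: both workloads compete for the
# same units, so the work serializes.
general_only = raster_ms + rt_ms

# With dedicated RT units the traversal runs alongside the shading;
# assuming (optimistically) perfect overlap, frame time is the max.
with_rt_cores = max(raster_ms, rt_ms)

print(f"shader-only:   {general_only:.1f} ms -> {1000 / general_only:.0f} fps")
print(f"with RT cores: {with_rt_cores:.1f} ms -> {1000 / with_rt_cores:.0f} fps")
```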

Now on to Crytek's demo: it's nothing special. I can show you several examples of RT demos running on a 750 Ti or a 970 with great fps, but they are still highly specific, constrained, hand-tuned examples, not applicable in a game or even in API form. The fact that Crytek hid the fps of the demo is another testament that even they are not confident enough to show those results.
 
  • Like
Reactions: Timmah!

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Tech demos are shown all the time as proof of concept, but often do not carry the same performance once an actual game is built on top of them. The new CryEngine looks neat, but it's just a tech demo. No AI, no NPCs, no explosions, no synchronized sound, no internet data packets to parse and input, no other special effects happening. Throw all of that into the mix and performance is going to drop. How much? That remains to be seen.

No one has said RT can't be done on legacy hardware. All Nvidia did was create fixed-function cores to speed it up and, by doing so, brought the conversation about RT to the table a few years ahead of when it was thought to happen.

Almost all of the things you note are handled by the CPU, not the GPU. Having other characters and such on screen can impact performance, but the game engine/networking has zero to do with the GPU.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Almost all of the things you note are handled by the CPU, not the GPU. Having other characters and such on screen can impact performance, but the game engine/networking has zero to do with the GPU.

Explosions, bokeh, filtering, occlusion, etc. are all graphics resources. AI, scripting, sound, networking, etc. may have little to do with graphics, but they all still impact performance. Either way, you completely missed the point of what I was saying. Tech demos are designed to look and run awesome. They are very controlled scenarios, putting a best foot forward to highlight small aspects of a larger picture. Just because a tech demo can run at 30 fps on a Vega 56 does not mean a Vega 56 or a 1080 Ti or whatever will run a full-featured game based on the same engine.

Knee-jerk reactions to a video of an unreleased tech demo, one not yet running a game or even having a game announced, are exactly that. Knee-jerk.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
The easiest way to see that some of the arguments above are nonsense is to consider how much R&D and die space NV is devoting to the RT cores on these chips. They would not be doing that if the cores didn't do something really quite valuable.
(Whether it has made for better products overall is another argument, of course.)