Question DF: Fortnite's Unreal Engine 5 Upgrade Reviewed (Does UE5 reduce the need for HW RT?)


Heartbreaker

Diamond Member
Apr 3, 2006
4,228
5,228
136
I just watched Digital Foundry's analysis of the Unreal Engine 5 upgrade to Fortnite.

Its Lumen global illumination software mode looks as good as the HW version most of the time - very impressive.

If more games use UE5, then I think most people don't really need to be that concerned about HW RT, while still getting excellent global illumination.

 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
-I too think the reflection argument is grasping at straws.

The reflection "argument" wasn't really an argument. I just pointed out that Lumen cannot be considered actual ray tracing because it's too limited in its simulation.

And it's not just for reflections, but for other things as well.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
The guy at 20:05 has to pee so bad.

This looks good, but I'll wait to see it in game on my own system. I've grown very jaded about the latest thing turning out to be a small step change.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
HW RT in UE5 is 50% slower than SW RT. And this is per Epic. They expect very few people to use the slower option.

Not sure I understand why anybody would choose the HW RT hill to die on. Having a software implementation means it works the same on all cards. And it means we don't need fixed-function hardware, which has greatly slowed GPU performance gains over the last few years, since die space that would have gone to rasterization performance had to be given over to RT fixed-function hardware that is otherwise useless if RT isn't on.

Wargaming actually had a test build of World of Tanks last year that did fully ray-traced shadows on the CPU with very little performance hit compared to HW RT shadows, so it would work on any GPU. The thought is that many games only use 4 or so cores, so with most gamers having 6-10 cores there is CPU to spare to handle other tasks.
 

KompuKare

Golden Member
Jul 28, 2009
1,016
934
136
HW RT in UE5 is 50% slower than SW RT. And this is per Epic. They expect very few people to use the slower option.

Not sure I understand why anybody would choose the HW RT hill to die on. Having a software implementation means it works the same on all cards. And it means we don't need fixed-function hardware, which has greatly slowed GPU performance gains over the last few years, since die space that would have gone to rasterization performance had to be given over to RT fixed-function hardware that is otherwise useless if RT isn't on.
Never mind RT fixed function, the genius of the green marketing machine was to convince consumers that they need tensor cores to upscale!

Handy, as Nvidia then doesn't have to make separate pro versions of all their dies with tensor cores, but while I am sure those tensor cores do something, as FSR has demonstrated they don't add all that much.

Guess tensor cores are sort of the opposite of fixed function though.

I am sure if a more hybrid RT approach finds favour, then a few fixed-function units might be of benefit, but those might equally be on the CPU as on the GPU. Some of the CPU benches of RT games do seem to suggest that either GPUs need more RT hardware, or some things will continue to be best done on the CPU.

I'm sure eventually there will be more RT, but I also suspect that, as the likes of Portal demonstrate, any game developer going for a dogmatic RT-everything approach vs someone willing to cheat to get better lighting is going to lose. Full RT without any raster "cheating" is a very long way away and may never come to pass.

Strange how the big champions of "RT is the future" are also big champions of upscaling. And you don't get much more "cheating" than upscaling. Well, I guess the next big "cheat" is Fake Frames, which marketeers want us to call Frame Generation!
 

Mopetar

Diamond Member
Jan 31, 2011
7,848
6,015
136
The reflection "argument" wasn't really an argument. I just pointed out that Lumen cannot be considered actual ray tracing because it's too limited in its simulation.

Who cares if it's real or not? I don't play games to have "actual" ray tracing. I play games because I want an immersive experience, and choppy frame rates will do more to hurt that than slightly less realistic reflections.

Until we get 5090 or maybe even 6090 levels of performance at the mid-range and entry-level tiers, actual ray tracing isn't going to be implemented in anything other than rereleases of decade-old games. No company will make a game that 98% of the market won't be able to run.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
HW RT in UE5 is 50% slower than SW RT. And this is per Epic. They expect very few people to use the slower option.

It won't be that way for very long, once it gets properly optimized.

Not sure I understand why anybody would choose the HW RT hill to die on. Having a software implementation means it works the same on all cards. And it means we don't need fixed-function hardware, which has greatly slowed GPU performance gains over the last few years, since die space that would have gone to rasterization performance had to be given over to RT fixed-function hardware that is otherwise useless if RT isn't on.

Because Lumen isn't really ray tracing, and ray tracing is the future of modern graphics. Lumen is just a stop gap solution. The goal is to eventually have full ray tracing.

Wargaming actually had a test build of World of Tanks last year that did fully ray-traced shadows on the CPU with very little performance hit compared to HW RT shadows, so it would work on any GPU. The thought is that many games only use 4 or so cores, so with most gamers having 6-10 cores there is CPU to spare to handle other tasks.

Do you have a link? I'd like to look into this.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Never mind RT fixed function, the genius of the green marketing machine was to convince consumers that they need tensor cores to upscale!

Handy, as Nvidia then doesn't have to make separate pro versions of all their dies with tensor cores, but while I am sure those tensor cores do something, as FSR has demonstrated they don't add all that much.

Last time I checked, DLSS was significantly better in terms of quality compared to FSR. In fact, DLSS has at times been considered better than native resolution in many games.

So I guess those tensor cores aren't as worthless as you'd like to think.

Strange how the big champions of "RT is the future" are also big champions of upscaling. And you don't get much more "cheating" than upscaling. Well, I guess the next big "cheat" is Fake Frames, which marketeers want us to call Frame Generation!

The same people that lambasted DLSS are now celebrating FSR. And the same people that lambasted DLSS 3 are hoping that FSR 3.0 will have FG.

Tribalism is a curious thing indeed.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Who cares if it's real or not? I don't play games to have "actual" ray tracing. I play games because I want an immersive experience, and choppy frame rates will do more to hurt that than slightly less realistic reflections.

Just because a game has RT in it doesn't mean it's going to be choppy or have poor performance. I've played several RT-enabled games, and in some of them I'm at a locked 120 FPS at native 4K, e.g. Doom Eternal and Spider-Man Remastered.

Much of the time, one of the main reasons a game's performance suffers so much when RT is enabled has more to do with the DX12 renderer implementation being crappy than with RT itself. Due to the reliance on the CPU for BVH building and maintenance, if the DX12 renderer is not sufficiently multithreaded, turning on RT will tank performance. Callisto Protocol is like that.
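To make that CPU-side cost concrete, here is a toy sketch (my own illustration, not any engine's or D3D12's actual code) of the kind of per-frame acceleration-structure upkeep being described: refitting bounds for every dynamic mesh, either on one thread or spread across workers. A renderer that keeps this work serial is exactly the kind that falls over once RT is switched on.

```cpp
// Toy sketch: per-frame "BVH maintenance" style work on the CPU,
// done serially vs. spread across hardware threads.
#include <algorithm>
#include <cstdio>
#include <future>
#include <vector>

struct AABB { float min[3], max[3]; };
struct Mesh { std::vector<float> verts; AABB bounds; };  // xyzxyz...

// Recompute a mesh's bounding box from its (animated) vertices.
static void refit(Mesh& m) {
    AABB b = {{1e30f, 1e30f, 1e30f}, {-1e30f, -1e30f, -1e30f}};
    for (size_t i = 0; i + 2 < m.verts.size(); i += 3)
        for (int a = 0; a < 3; ++a) {
            b.min[a] = std::min(b.min[a], m.verts[i + a]);
            b.max[a] = std::max(b.max[a], m.verts[i + a]);
        }
    m.bounds = b;
}

int main() {
    std::vector<Mesh> meshes(64, Mesh{std::vector<float>(3000, 1.0f), {}});

    // Serial: the whole per-frame cost lands on one core and can stall the frame.
    for (Mesh& m : meshes) refit(m);

    // Parallel: the same work spread across worker threads, which is what a
    // well-threaded renderer does so RT bookkeeping doesn't become the bottleneck.
    std::vector<std::future<void>> jobs;
    for (Mesh& m : meshes)
        jobs.emplace_back(std::async(std::launch::async, [&m] { refit(m); }));
    for (auto& j : jobs) j.get();

    std::printf("refit %zu meshes\n", meshes.size());
}
```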

In any case, there's no reason for you and others to have such a hostile posture towards hardware accelerated RT. Lumen is close enough to ray tracing that most people probably won't even be able to tell the difference unless they scrutinize everything.

We can have our cake and eat it too.

Until we get 5090 or maybe even 6090 levels of performance at the mid-range and entry-level tiers, actual ray tracing isn't going to be implemented in anything other than rereleases of decade-old games. No company will make a game that 98% of the market won't be able to run.

We have to start somewhere. It's already improved by large margins since Turing came out 4 years ago.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,827
7,191
136
Did you miss where I said, "And it's not just for reflections, but for other things as well."

-Yeah, but you keep saying Lumen isn't real ray tracing based on some arbitrary criteria.

All simulated ray tracing is a cheat of the real thing; no computer is going to cast the infinite number of rays an actual IRL light source does. It's all cheating and approximations all the way down, and for some reason you've decided Lumen is on the wrong side of that line.

If it looks like a ray-traced effect and doesn't immediately grind hardware to a halt, that doesn't mean it's not a ray-traced effect.
 

KompuKare

Golden Member
Jul 28, 2009
1,016
934
136
Last time I checked, DLSS was significantly better in terms of quality compared to FSR. In fact, DLSS has at times been considered better than native resolution in many games.

So I guess those tensor cores aren't as worthless as you'd like to think.



The same people that lambasted DLSS are now celebrating FSR. And the same people that lambasted DLSS 3 are hoping that FSR 3.0 will have FG.

Tribalism is a curious thing indeed.

Fake Frames are Fake Frames; you won't get me applauding them.

As for the "better than native" claim... all that seems to have happened is that people prefer sharper images, and if the only alternative is anti-aliasing that reduces picture quality, then people prefer DLSS.

Won't get me praising FSR either - although recent comparisons have it far closer to DLSS than you give it credit for.

I'm on 1440p and I can't see the point of running at 720p and upscaling to 1440p; might as well get a TV with a hardware upscaler, or a console, which the PCMR used to laugh at for using upscalers. Or why not get a 4K screen and apply a blurring coating?

My point about tensor cores is that they are mostly an economic thing: Nvidia is able to use the GA102 for an RTX 3080 and an RTX A6000 because the consumer chips also have tensor cores, but is that silicon area being put to good use, or is it, like an iGPU on a high-end CPU, mostly an economic thing (i.e. Intel's 13900K etc. sell in far too few numbers to have their own die)?

The only positive of FSR and Intel's XeSS is that they aren't proprietary.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
-Yeah, but you keep saying Lumen isn't real ray tracing based on some arbitrary criteria.

How is it arbitrary? Ray tracing has pretty specific criteria, and being able to reflect off-screen geometry (without resorting to tricks) is one of them. If Lumen can't do off-screen reflections, then it's not ray tracing. It's hybridized, which is perfectly fine.

If ray tracing was so easy to crack that it could fully run on the CPU, then I doubt Intel or AMD would have developed RT acceleration hardware.

All simulated ray tracing is a cheat of the real thing; no computer is going to cast the infinite number of rays an actual IRL light source does. It's all cheating and approximations all the way down, and for some reason you've decided Lumen is on the wrong side of that line.

If it looks like a ray-traced effect and doesn't immediately grind hardware to a halt, that doesn't mean it's not a ray-traced effect.

I agree with you to an extent. Light simulation is obviously extremely compute-intensive, and we're likely never going to have physically accurate simulation that could be used for scientific purposes.

However, we don't need to go that far. We just need to get the low-hanging fruit. Off-screen reflections are one of the hallmarks of ray tracing and are very attainable, so they should be one of the standards that determine whether or not ray tracing is being used.
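To make the off-screen point concrete, here is a toy sketch (my own illustration, not Lumen's or any engine's code; the scene and the crude "was it rendered?" check are deliberately simplified): a screen-space reflection can only sample surfaces already in the frame, while a world-space trace can still hit geometry behind the camera.

```cpp
// Toy contrast: screen-space reflection lookup vs. a world-space ray trace.
#include <cmath>
#include <cstdio>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; float radius; const char* name; };

// Standard ray-sphere test; returns the hit distance along the ray, if any.
static std::optional<float> intersect(Vec3 o, Vec3 d, const Sphere& s) {
    Vec3 oc = sub(o, s.center);
    float b = dot(oc, d);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;
    float t = -b - std::sqrt(disc);
    return t > 0.0f ? std::optional<float>(t) : std::nullopt;
}

int main() {
    // Camera at the origin looking down +Z; geometry with z <= 0 never makes
    // it into the frame, so a screen-space technique can never "see" it.
    std::vector<Sphere> scene = {
        {{0, 0,  5}, 1.0f, "sphere in front of the camera (on screen)"},
        {{0, 0, -5}, 2.0f, "sphere behind the camera (off screen)"},
    };

    // A reflection ray bouncing off a floor point back toward the camera.
    Vec3 origin = {0, -1, 2};
    Vec3 dir    = {0, 0, -1};

    // World-space trace: queries actual geometry, on screen or not.
    for (const Sphere& s : scene)
        if (auto t = intersect(origin, dir, s))
            std::printf("world-space trace hits: %s (t = %.2f)\n", s.name, *t);

    // Screen-space lookup: can only return surfaces already in the frame.
    bool found = false;
    for (const Sphere& s : scene)
        if (s.center.z > 0.0f && intersect(origin, dir, s))  // crude "was it rendered?"
            found = true;
    if (!found)
        std::printf("screen-space lookup: nothing to sample; the reflection fades or falls back\n");
}
```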
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Fake Frames are Fake Frames; you won't get me applauding them.
You know that everything in 3D rendering is fake - it's all built on shortcuts, approximations, and subtle trickery. The primary aim is to fake the best result possible given the limited processing power available, not to produce the most accurate one. Given that, there's nothing wrong with interpolated frames if they make the result look better.

For games you should just look at the end result when you are playing the game, not the process to generate it, and if it looks good it looks good - you shouldn't care what jiggery pokery was used to get there.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Fake Frames are Fake Frames; you won't get me applauding them.

I say if it works, it works. I've tried FG in 2 games so far, Plague Requiem and Witcher 3 next-gen edition. Both times it looked completely normal and indistinguishable from normal rendering, though I wasn't really scrutinizing the image, to be honest.

I turned off FG in Plague Requiem as it doesn't really need it, but I kept it on for Witcher 3 Next gen as it still has some CPU optimization issues holding back performance.

That's probably the biggest benefit of FG, in that it helps to bypass poor CPU performance.

As for the "better than native" claim... all that seems to have happened is that people prefer sharper images, and if the only alternative is anti-aliasing that reduces picture quality, then people prefer DLSS.

Personally I think DLAA is amazing. It really cleans up an image while preserving the detail in native resolution. I'm using DLAA now in Baldur's Gate 3 early access.

My point about tensor cores is that they are mostly an economic thing: Nvidia is able to use the GA102 for an RTX 3080 and an RTX A6000 because the consumer chips also have tensor cores, but is that silicon area being put to good use, or is it, like an iGPU on a high-end CPU, mostly an economic thing (i.e. Intel's 13900K etc. sell in far too few numbers to have their own die)?

The only positive of FSR and Intel's XeSS is that they aren't proprietary.

A.I. and machine learning are big things in 3D graphics and general computing now. Whether you like it or not, that's the technological trend. We'll see whether AMD gets on board with FSR 3.0. If they don't, they'll be even further behind than they are now.

I made this point in another thread, but A.I. and machine learning were the reasons the Google Pixel smartphones had the best cameras for a few years, until Apple and Samsung decided to invest in them as well. Those two technologies are extremely useful for all sorts of compute applications, so companies need to embrace them or get left behind.
 

KompuKare

Golden Member
Jul 28, 2009
1,016
934
136
You know that everything in 3D rendering is fake - it's all built on shortcuts, approximations, and subtle trickery. The primary aim is to fake the best result possible given the limited processing power available, not to produce the most accurate one. Given that, there's nothing wrong with interpolated frames if they make the result look better.

For games you should just look at the end result when you are playing the game, not the process to generate it, and if it looks good it looks good - you shouldn't care what jiggery pokery was used to get there.
Maybe, but since we are not watching a pre-recorded video (where interpolation to the next frame using keyframes from the future is possible), there is really no way Fake Frames will not add to user latency. Adaptive sync we already have, and that prevents tearing or the worry that displaying a frame for too long will cause issues.

So what can Fake Frames add to this? They could try to guess based on what the player (or players, in the case of multiplayer) has done and might do in the future. But imagine it has looked at past frames and assumes the next one will continue along the current trajectory, except the player changes direction. Okay, one wrong frame won't matter, except the next real frame will now be on a different trajectory than the guessed one, possibly causing a jerk.

Can't see any way to resolve this.
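Rough arithmetic on the latency side of that argument (purely illustrative numbers, not measurements of DLSS 3 or any specific frame-generation implementation): the interpolated frame between N and N+1 cannot exist until N+1 has been rendered, so the real frame N is held back while N+1 renders before anything reaches the screen.

```cpp
// Back-of-the-envelope latency arithmetic for interpolated frames.
// Illustrative only: real pipelines (Reflex, frame pacing, etc.) shift these numbers.
#include <cstdio>

int main() {
    const double renderMs = 1000.0 / 60.0;   // assume 60 "real" rendered fps

    // Native: a finished frame can be shown immediately.
    const double nativeDelayMs = 0.0;

    // Interpolation: the in-between frame for the interval N..N+1 cannot be
    // generated until frame N+1 exists, so frame N is held back while N+1
    // renders -- roughly one extra render interval before it reaches the screen.
    const double interpolatedDelayMs = renderMs;

    std::printf("displayed frame rate: %.0f fps -> %.0f fps\n",
                1000.0 / renderMs, 2.0 * 1000.0 / renderMs);
    std::printf("extra delay before a real frame is shown: %.1f ms -> ~%.1f ms\n",
                nativeDelayMs, interpolatedDelayMs);
}
```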
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
For games you should just look at the end result when you are playing the game, not the process to generate it, and if it looks good it looks good - you shouldn't care what jiggery pokery was used to get there.

Thing is though, using "jiggery pokery" can take more development time and resources. Case in point, the mirror reflections in older games were done using another camera placed inside the mirror to view the character. All that hassle just to get a reflection in a mirror.
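For what it's worth, that "camera in the mirror" trick is essentially planar reflection: mirror the camera across the mirror's plane and render the scene a second time from that mirrored viewpoint into a texture. A minimal sketch of just the math (assuming a plane n·x + d = 0 with unit normal n; not any particular engine's code):

```cpp
// Reflecting a point across the plane n·x + d = 0 (n must be unit length).
// Rendering the scene again from the reflected camera is what produces the
// classic planar mirror reflection in older games.
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 reflectAcrossPlane(Vec3 p, Vec3 n, float d) {
    // Signed distance from the point to the plane, then move twice that
    // distance along the normal to land on the other side.
    float dist = dot(p, n) + d;
    return { p.x - 2.0f * dist * n.x,
             p.y - 2.0f * dist * n.y,
             p.z - 2.0f * dist * n.z };
}

int main() {
    Vec3 mirrorNormal = {0.0f, 0.0f, 1.0f};   // mirror lying in the z = 3 plane
    float d = -3.0f;                           // n·x + d = 0  ->  z - 3 = 0

    Vec3 camera = {0.0f, 1.5f, 0.0f};
    Vec3 mirroredCamera = reflectAcrossPlane(camera, mirrorNormal, d);

    // The view direction is mirrored the same way, but as a direction (d = 0).
    Vec3 viewDir = {0.0f, 0.0f, 1.0f};
    Vec3 mirroredDir = reflectAcrossPlane(viewDir, mirrorNormal, 0.0f);

    std::printf("mirrored camera: (%.1f, %.1f, %.1f), dir: (%.1f, %.1f, %.1f)\n",
                mirroredCamera.x, mirroredCamera.y, mirroredCamera.z,
                mirroredDir.x, mirroredDir.y, mirroredDir.z);
}
```

The "hassle" is that every mirror needs its own extra render pass from the mirrored camera, which is exactly the per-surface cost ray-traced reflections avoid.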
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
It won't be that way for very long, once it gets properly optimized.

Because Lumen isn't really ray tracing, and ray tracing is the future of modern graphics. Lumen is just a stop gap solution. The goal is to eventually have full ray tracing.

Do you have a link? I'd like to look into this.

Yes, it is. But like many image technologies used in games, it's rendering 95% of the data, which significantly improves performance because the last 5% is super expensive to calculate. HW RT is similar to when tessellation first came out. nVidia-backed games put it on everything, even if it did not need to be there. And then in games like Witcher 3, nVidia jacked up the tessellation factor to obscenely high values not just to hurt AMD, but to hurt their own previous-gen cards. But these days, tessellation is only used where it is needed, and at much more sane levels.

Lumen visually looks very similar, but at a significantly lower cost, just like some previous graphics technologies have been optimized to be much better today than they were years ago. And sure, some of that optimization is better coding. But a lot of it is throwing out the overly precise bits, to the point where it looks very close visually but with significantly less work involved.

I watched the video, and the video implies that RT is still being processed on the GPU, but the BVH building is being done on the CPU with Intel Embree. Seems pretty normal to me, but I don't know if @Stuka87 was talking about something else.

It came out 3 years ago (2019, after I looked it up), so my memory of it was a bit off. But all the heavy work is done by the CPU, and it works on any GPU, even GPUs like a GTX 1660 that otherwise have no HW RT.
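For anyone wondering what "ray-traced shadows on the CPU" boils down to: for each shaded point you fire a ray toward the light and ask whether anything blocks it. A brute-force toy version (my own sketch, not Wargaming's or Embree's code; a real implementation tests against a BVH instead of every triangle and spreads the rays across cores):

```cpp
// Brute-force shadow ray: "is the segment from this point to the light blocked?"
// Real CPU ray tracers (e.g. anything built on Intel Embree) run the same test
// against a BVH instead of every triangle, and parallelize across cores.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Triangle { Vec3 v0, v1, v2; };

// Moller-Trumbore intersection, any-hit flavor: we only care whether the
// occluder lies between the surface point (t = 0) and the light (t = maxT).
static bool occludes(const Triangle& tri, Vec3 orig, Vec3 dir, float maxT) {
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;          // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 tv = sub(orig, tri.v0);
    float u = dot(tv, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(tv, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * inv;
    return t > 1e-4f && t < maxT;                      // hit between point and light
}

int main() {
    // One triangle hovering between the shaded point and the light.
    std::vector<Triangle> occluders = {{{-1, 1, -1}, {1, 1, -1}, {0, 1, 1}}};

    Vec3 point = {0, 0, 0};
    Vec3 light = {0, 5, 0};
    Vec3 toLight = sub(light, point);
    float dist = std::sqrt(dot(toLight, toLight));
    Vec3 dir = {toLight.x / dist, toLight.y / dist, toLight.z / dist};

    bool shadowed = false;
    for (const Triangle& tri : occluders)
        if (occludes(tri, point, dir, dist)) { shadowed = true; break; }

    std::printf("point is %s\n", shadowed ? "in shadow" : "lit");
}
```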
 

coercitiv

Diamond Member
Jan 24, 2014
6,214
11,961
136
Thing is though, using "jiggery pokery" can take more development time and resources.
It took decades for real-time RT to be feasible, I think we can wait a bit more to have it done efficiently with a mix of software and hardware.

Meanwhile, not using "jiggery pokery" is costing us an arm and a leg, both in terms of prices and power consumption. "Moore's Law is Dead", so we decided we want to take the EXPENSIVE route to improving IQ in games. What could go wrong?
 

maddie

Diamond Member
Jul 18, 2010
4,749
4,691
136
Thing is though, using "jiggery pokery" can take more development time and resources. Case in point, the mirror reflections in older games were done using another camera placed inside the mirror to view the character. All that hassle just to get a reflection in a mirror.

It took decades for real-time RT to be feasible, I think we can wait a bit more to have it done efficiently with a mix of software and hardware.

Meanwhile, not using "jiggery pokery" is costing us an arm and a leg, both in terms of prices and power consumption. "Moore's Law is Dead", so we decided we want to take the EXPENSIVE route to improving IQ in games. What could go wrong?
"jiggery poker" indeed. I see a bit of a similarity to the ornithopter vs conventional aircraft debate. AFAIK, the gimmick used by older games was quite efficient computationally vs the brute force RT method.

The argument "All that hassle just to get a reflection in a mirror." seems so ignorant of computational realities. As an aside, you need a universe to fully simulate a universe.