That doesn't seem to be the case with Godfall and Wolfenstein: Youngblood. The difference in allocated VRAM between RT on and off is not very noticeable. With Godfall you do see a massive system RAM increase, though; there's no RAM increase in Wolfenstein. Also, Cyberpunk's requirement for RT is just a 2060 with 6GB, so I don't think RT (at least in its current form) uses a massive amount of VRAM.
The RT in Godfall is a real memory hog. You just don't see it, because it can't be activated on a GeForce in the current version of the game. No matter how hard you try, the service library is missing on your system. As you can see in the benchmark, those are not RT shadows; the overall quality is too low.
Godfall RT is working on NV cards; you can compare my benchmark video with the 6800 XT benchmark. They look the same, and I can tell the shadows are there when they should be. If something is written using a DX API, then any hardware that supports it will work; that's why the 6800s can run the existing RT games (minus a few bugs). Why don't the developers enable it for NV cards? I don't know; it might be some bug I haven't hit, or, since it's an AMD-sponsored game, they're giving AMD exclusivity for now. Since it's an Unreal Engine game, the configuration file can be changed to enable it. Also, the fact that all current RT titles, including Control, which uses RT heavily, can run on a 6GB 2060 says RT doesn't use a massive amount of VRAM.
Godfall works in a very unusual way. It uses DXR 1.1 to start the RT work, and then it hands the work off to Radeon Rays 4.0. But on your system the program can't do that, because the service library for the hit token is missing. So in the end you get nothing back, and every shader dispatched through DXR ends up as a miss shader. There is really no point in turning it on, because the effect in its current form is not implemented in a standard way; a standard RT shadow effect in Godfall would be too memory-heavy. Your system just does some extra calculation for nothing.
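To make the "everything becomes a miss shader" point above concrete, here is a minimal toy sketch in C++ (my own illustration, not Godfall's code; resolve_hit is a hypothetical stand-in for the missing hit-resolution library): if nothing ever reports an intersection, every shadow query takes the miss path and the output looks exactly like RT shadows turned off.

```cpp
// Toy model of the "everything becomes a miss shader" argument above.
// resolve_hit() stands in for the vendor-side hit resolution; here it is
// assumed to be unavailable and therefore never returns an intersection.
#include <cstdio>
#include <optional>
#include <vector>

struct Ray { float origin[3]; float dir[3]; float tMax; };
struct Hit { float t; };

std::optional<Hit> resolve_hit(const Ray&) { return std::nullopt; }  // library missing

int main() {
    std::vector<Ray> shadowRays(4, Ray{{0, 0, 0}, {0, 1, 0}, 100.0f});
    int lit = 0;
    for (const Ray& r : shadowRays)
        if (!resolve_hit(r)) ++lit;   // miss path: the pixel stays fully lit
    std::printf("%d of %zu shadow rays missed: image looks like RT is off\n",
                lit, shadowRays.size());
    return 0;
}
```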
Memory usage depends on the implementation: if the length of the rays is kept minimal, the effect won't consume a lot of memory; otherwise it will.
I saw it, and the RT shadows are missing from the game itself.
The problem is that the RT in Godfall is not implemented inside the DXR API. It's an interop implementation that does the job outside DirectX. You can check the library file in the game folder: amdrtshadow.dll.
I wrote it earlier: a standard DXR 1.1 implementation would need much more VRAM, 18-20 GB even at 1440p. So there is no point in enabling it on NVIDIA when most of their cards don't have that much memory. There will be some optimization later, or maybe NVIDIA will allow the hit token via NVAPI and an RT implementation could become possible in OptiX, but not today.
You can change the configuration file, but every hit token is resolved outside DXR; that's why there is an amdrtshadow.dll file in the game folder. NVIDIA can't use it, so while the shaders will start, they don't get any results back, and every job ends in a miss shader. That's why there are no RT shadows in your video.
RT implementations can differ. Older games configured a very limited ray length. Godfall configured its RT effect for much bigger scenes, so the rays are longer, and that causes much higher memory usage.
See the video where my character is standing at the start. Compare it against the gameplay video of the 6800 XT; it's the same. And I have compared it further in game, in different areas and from different angles. It's the same. Also, even on the 6800 XT the allocated memory is not maxed out. You're wrong about how many rays Godfall uses; it really doesn't use that many. It's not very impressive at all. Cyberpunk comes out in less than two weeks; hopefully (and I think it will) it will be a better representation of next-gen graphics than Godfall.
I'm developing in C++ under Ubuntu. My application has a lot of threads (more than 25).
My questions are:
- If I run my app on the graphics card (which is known as a CPU with lots of cores), will I see an improvement in performance?
- I know that graphics cards have more cores than an i7 (for example), so my app would have fewer context switches and thus better performance. Is that true? (See the sketch below.)
- Do the cores of a graphics card generally perform better than CPU cores? I know it depends on the graphics card and the CPU type, but is there a general answer?
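For the context-switch question, here is a minimal, hypothetical C++ sketch using only the standard library: it compares the number of threads the app spawns with the hardware threads the CPU exposes; anything beyond that ratio is handled by OS time-slicing, i.e. context switches. Note that GPU cores would not run these std::thread workers at all; the work would have to be rewritten as data-parallel kernels.

```cpp
// Compare spawned threads against available hardware threads to gauge
// oversubscription (the source of the context switching asked about).
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const unsigned hw  = std::thread::hardware_concurrency(); // e.g. 8 on an i7
    const unsigned app = 25;                                   // threads the app uses

    std::printf("hardware threads: %u, application threads: %u\n", hw, app);
    if (hw != 0)
        std::printf("oversubscription: ~%.1fx (handled by OS context switches)\n",
                    static_cast<double>(app) / hw);

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < app; ++i)
        workers.emplace_back([] { /* placeholder for per-thread work */ });
    for (std::thread& t : workers) t.join();
    return 0;
}
```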
CP2077 was developed as a last-gen title and was intended to be released long ago. I'm sure the graphics look great, but I don't think it will be a good representation of what true next-gen game engines written to take full advantage of the PS5, XSX, RDNA2, or even Ampere will be capable of.
And you don't think it would take years to develop this "next-gen" engine and then the game? All that matters is how good the game looks. The advantage of Ampere over Turing is basically performance; there's nothing new in terms of features. So as long as the engine scales with the hardware (more rays, DLSS enabled, variable-rate shading), it's "next-gen". RDNA2 on the PC side didn't add much either; it doesn't have the geometry engine of the PS5. SAM is hit or miss, and even when it's on it doesn't add much. So I don't see anything revolutionary coming out anytime soon. I think the consoles were way overhyped; when people dig into them, like DF does with its game reviews, they see that they're just okay compared to what PC gaming is like. Only the new Horizon Dawn game looks impressive, but at the end of the DF breakdown he cautioned that we might just be seeing cinematics with no clue what actual gameplay might look like. Happy to be wrong, but I think we got fooled by this "next-gen".
I still don't see it. You have the normal shadows.
It has the same geometry engine. This unit is also identical on Xbox Series S and X. The APIs are different, but the underlying hardware is the same.
I'm not sure what you're looking at. Here's my original post.
Certainly, not all of the allocated memory is needed. But that is the problem with the PC platform: memory management is inefficient, and memory usage is much higher than the theoretical need. So even if you don't need a lot of data, it still has to be loaded into memory. In the end you need more memory, because otherwise the program won't run, and there is no way on PC to get around the OS limitations. On consoles this is much better, because the OS won't touch the app's memory, so the program can do whatever it wants.
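As a rough, made-up-numbers illustration of the allocated-versus-used point (a sketch, not a statement about any specific game or driver): a process can hold a large allocation while only a fraction of it is ever touched, yet tools report the full allocation.

```cpp
// Allocate a large buffer but write only a small slice of it. The full size
// counts as allocated address space; the touched slice is what actually ends
// up backed by physical pages (exact behavior is OS-dependent).
#include <cstddef>
#include <cstdio>
#include <memory>

int main() {
    const std::size_t allocBytes = 512ull << 20;   // 512 MiB "allocated"
    const std::size_t usedBytes  = 64ull  << 20;   // 64 MiB actually touched
    std::unique_ptr<unsigned char[]> buffer(new unsigned char[allocBytes]);

    for (std::size_t i = 0; i < usedBytes; i += 4096)   // touch one byte per page
        buffer[i] = 1;

    std::printf("allocated: %zu MiB, touched: %zu MiB\n",
                allocBytes >> 20, usedBytes >> 20);
    return 0;
}
```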
Godfall uses one ray per pixel, with one secondary ray. This is normal or slightly above normal. The problem is the length of the rays: Godfall is very heavy on this parameter compared to other titles, and that is why it needs a lot of VRAM. Limiting the ray length would save a lot of memory.
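One back-of-the-envelope way to see why ray length matters so much for memory (purely a toy model with hypothetical distances, assuming roughly uniform geometry density, which real scenes do not have): the volume a shadow ray can reach, and with it the geometry and BVH that must stay resident, grows with the cube of the maximum ray length.

```cpp
// Toy scaling argument: reachable volume ~ tMax^3 under a uniform-density
// assumption, so raising the maximum ray length quickly inflates how much
// geometry the RT effect can touch.
#include <cmath>
#include <cstdio>

int main() {
    const double shortRay = 50.0;   // hypothetical "older title" max ray length (meters)
    const double longRay  = 400.0;  // hypothetical "large open scene" max ray length
    const double growth = std::pow(longRay / shortRay, 3.0);
    std::printf("raising tMax from %.0f m to %.0f m grows the reachable geometry "
                "by roughly %.0fx\n", shortRay, longRay, growth);
    return 0;
}
```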
The benchmark is bugged with RT on: if I run it back to back, the second run is significantly worse, the third even worse, and so on. In game it seems to be okay. I made two videos comparing RT on and off on my 3070 using ShadowPlay; there does appear to be some performance hit when playing in game with it on. Please watch in HDR if your monitor supports it; if YouTube rendered it correctly, you should get at least 1440p. HDR makes much more of an impact visually than RT.
This is without RT.
This is with RT turned on via the config file change.
You can compare this to someone with an RX 6800 XT to confirm the RT setting is working.
https://www.youtube.com/watch?v=7QwdbyVEJhY
It seems that with RT you lose a lot of the "fake" shadows? There are just a lot fewer of them for some reason. Maybe it's more realistic, or maybe the game isn't using many rays or they're very limited; perhaps that's why the performance hit is not that significant. In any event, I don't like it; I prefer the "fake" shadows. Unfortunately this game is unplayable for me at Epic settings; I have to drop to High to get 60 fps. It really is somewhat of a bait and switch: the game was performing really well in the tutorial section, but once you start the real story, the fps drops significantly. Getting back on topic, VRAM does not appear to be the problem, or I haven't run into it yet. The game just performs really badly at high resolution; it really needs DLSS even without RT.
I just noticed something in my videos and the 6800 XT one: system RAM usage is significantly higher with RT on. Another mystery with this game. I think I'm going to hold off on playing more of it until the developers can tune it; it really needs a 30% improvement.
You're the only person who says that. It's widely known that PS5 has this custom hw. RDNA3 is expected to have this.
I see your post, but there are no RT shadows in the video.
I saw a lot of PS5s in the last year. I had almost every early sample, and I can safely say that RedGamingTech has no clue what they're talking about.
Playstation 5 | RDNA 2 GPU Features, OS Reserves, Geometry Engine | EXCLUSIVE (www.redgamingtech.com)
Anyone care to speculate on the MSRP of a 3080 Ti with >10 GB of VRAM?
(We all know it is coming sometime next summer. It is not like we will be able to buy anything until next summer anyway.)
Nvidia recommends 10GB VRAM for max settings in Cyberpunk. We've already hit the limit.
View attachment 35446
Was finally able to score an EVGA 3080 FTW3 Ultra off Amazon last night. The Discord notification bot finally worked for me. Not going to wait for the 3080 Ti anymore.
I'm on that Discord bot too, yet I didn't see any notification for Amazon. I saw some Newegg alerts for the combos.
The 3080 Ti is supposedly pushed to Feb 11-17 now. I don't think it will be easily available until the summer, the way things are going with these cards.
Yeah, that sucks; I was hoping it was sooner. I want to upgrade at some point, and it looks like I don't want to wait that long. I've already been waiting for the past two years.
Desperate times regarding many consumer products. Seriously, it's almost impossible to find many products in totally unrelated hobbies. It makes me feel like the world might be coming to an end, but we must ask ourselves if it's worth living in a world where an $800 GPU only has 10GB of RAM. I think not. Revolution.
Did you collaborate on the Tom's Hardware article, "Ray-tracing and end of life regrets"?
Same here. There are only a few games I want a new card for (Cyberpunk, Control and FS2020), and I have more than enough older games in my play queue. I actually want HDMI 2.1 more than anything else, so I can use VRR and 120Hz on the desktop, which a 3080 would do just as well. I wouldn't even mind buying a 3090 (if I could actually find one), but I see no practical benefit in it over the cheaper cards.
You need to sign up on a Discord and keep an eye out; I have been able to order multiple 3070s/3080s/3090s that way. I canceled most of them since I am not a scalper, and thankfully my day job makes it not worth the hassle of trying to resell for a small profit. I'm waiting on the 3080 Ti release though; hopefully I can snag one at launch, and then my 3080 will be in the FS section near cost.
For an announcement or "availability"? Seems crazy early to drop a Ti already.