
Spider-Man has entered the chat.

CPU intensive. Especially with ray tracing. Perhaps a sign of things to come with console ports? I am guessing this is the PS5 version?

I know it is 720p, but the 2600X and 10600K being on the struggle bus is mildly shocking.

https://www.computerbase.de/2022-08/spider-man-remastered-pc-benchmark-test/4/

I watched the Digital Foundry video, and the claim was that it is very heavy on the CPU doing asset decompression. And it's a bigger problem with RT enabled. Not sure what to make of it.

 
The real question should be, if you have a large enough and fast enough SSD, why don't they have it decompress while downloading, and then stream it in when playing? This would alleviate the load times, and downloads take a long time, so it could be happening during that process.

It being a bigger issue with ray tracing is weird. Is this another BS situation where they also load in higher-quality textures to claim how much better ray tracing looks?

I really wish game developers/companies would be more transparent about this stuff. Wasn't shipping uncompressed audio in multiple languages (because consoles were bad at decompression, or didn't have the resources to spare for doing it on the fly; hence my earlier comment about decompressing during download) one of the claimed reasons for game sizes ballooning? Tell people and give them the option.
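Some back-of-the-envelope math on the uncompressed-audio claim. All the numbers below are made-up placeholders for illustration, not figures from any actual game:

```python
# Rough size estimate for shipping uncompressed PCM dialogue in every
# language. Assumed (hypothetical) numbers: 10 hours of dialogue,
# 48 kHz, 16-bit stereo, shipped in 8 languages.

hours = 10
sample_rate = 48_000       # samples per second
bytes_per_sample = 2 * 2   # 16-bit (2 bytes) x 2 channels
languages = 8

per_language_gb = hours * 3600 * sample_rate * bytes_per_sample / 1e9
total_gb = per_language_gb * languages

print(f"{per_language_gb:.1f} GB per language, {total_gb:.1f} GB total")
# roughly 6.9 GB per language, 55.3 GB total
```

Even modest lossy compression (say 10:1) would cut that tens-of-gigabytes figure to a few GB, which is why "uncompressed audio for every language" gets blamed for install-size bloat.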
 
The real question should be, if you have a large enough and fast enough SSD, why don't they have it decompress while downloading, and then stream it in when playing? This would alleviate the load times, and downloads take a long time, so it could be happening during that process.

That would reduce your "effective bandwidth". You would be able to stream fewer assets per second.
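To make the "effective bandwidth" point concrete, here is a tiny sketch with made-up numbers (the SSD speed and compression ratio are assumptions, not measurements):

```python
# Storing assets compressed means each byte read off the SSD expands
# into more usable asset data, so on-the-fly decompression raises the
# *effective* streaming rate -- at the cost of CPU (or dedicated HW)
# decompression work. Pre-decompressing to disk gives up that gain.

ssd_read_gbps = 3.5       # assumed raw sequential read speed, GB/s
compression_ratio = 2.0   # assumed 2:1 expansion on decompression

effective_compressed = ssd_read_gbps * compression_ratio    # 7.0 GB/s of assets
effective_uncompressed = ssd_read_gbps                      # 3.5 GB/s of assets

print(f"compressed on disk:   {effective_compressed:.1f} GB/s of assets")
print(f"uncompressed on disk: {effective_uncompressed:.1f} GB/s of assets")
```

So decompressing during download would halve the asset rate you can stream at runtime (under these assumed numbers), on top of roughly doubling the install size.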
 

I have the same question. Why does the game need to compile shaders continuously? I thought this was the graphics driver's job. So even if AMD/Nvidia/Intel pre-compile the shaders in their drivers, the game will still waste CPU cycles recompiling them?
 
Take a look at this article:


Long story short: the whole shader pipeline is mostly monolithic and there might be many combinations to compile.

And drivers have to compile each pipeline mostly separately.

So another console port which has not been optimized on Windows properly - what a surprise.
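The combinatorics are easy to see with a toy calculation. The feature toggles below are hypothetical, purely to show how a monolithic pipeline state multiplies options into compile jobs:

```python
# With monolithic pipeline state objects, each *combination* of baked-in
# options can require its own compiled pipeline, so the count is the
# product of the option counts, not the sum.
from math import prod

options = {              # hypothetical toggles and their value counts
    "shadow_quality": 3,    # low / medium / high
    "ray_tracing": 2,       # off / on
    "skinning_mode": 2,
    "material_variant": 8,
    "msaa_samples": 4,
}

permutations = prod(options.values())
print(permutations)  # 3 * 2 * 2 * 8 * 4 = 384 pipelines from just 5 toggles
```

A real renderer has far more toggles than five, which is how shader permutation counts climb into the thousands and why compilation can keep chewing CPU long after launch.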
 
Take a look at this article:


Long story short: the whole shader pipeline is mostly monolithic and there might be many combinations to compile.

And drivers have to compile each pipeline mostly separately.

So another console port which has not been optimized on Windows properly - what a surprise.
What I see is the future further splitting between gaming and compute GPUs.

I know this is off topic, but this is why the argument about Intel dropping gaming GPUs and keeping compute GPUs funded might make sense. It appears that at a deep level the requirements are very different, leading to different designs in the future. All the major companies must know this.
 
Strange that the CPU rankings change between non-RT and RT with the 5800X3D winning without RT but being in 3rd place when RT is turned on:
[image: CPU benchmark chart]

Doesn't seem to be a thread thing as 5800X vs 5950X relative positions stay similar, and the 6C/12T i5-12500 comes in 2nd with RT.
[image: CPU benchmark chart]

Still, I'd single out the 5800X3D and the 12500 as doing well here. The 12900K is just too brute-force for me (although there's no indication of its power consumption running this game), whereas the 12500 is usually pretty frugal.
 
Okay, all the more reason for reviewers to include power numbers when reviewing this. While I love to see CPU-only figures, in this case it sounds like the PCIe data transfers mean complete system power draws are really needed as well.
 
I watched the Digital Foundry video, and the claim was that it is very heavy on the CPU doing asset decompression. And it's a bigger problem with RT enabled. Not sure what to make of it.

This is what console ports are now until there are appropriate accelerators on GPUs. Both the major consoles have dedicated decompression HW.
 
Take a look at this article:


Long story short: the whole shader pipeline is mostly monolithic and there might be many combinations to compile.

And drivers have to compile each pipeline mostly separately.

So another console port which has not been optimized on Windows properly - what a surprise.

Yup, another sub-par port, for $60 no less. I paid $50 not long after it came out on PS4, four or so years ago. But hey, it's got ray tracing!
 
Yup, another sub-par port, for $60 no less. I paid $50 not long after it came out on PS4, four or so years ago. But hey, it's got ray tracing!
Yeah, I played it on PS4 Pro years ago, so I'm not double-dipping until it's half price.

I feel like the bad-port rhetoric is somewhat misplaced. So far it seems to be a rare case where the consoles have a hardware capability the PC doesn't, is all. It scales well, and can be played on a modest PC for a PS4-like experience. Not what I'd call a terrible port, personally. There are some bugs, but a patch should fix most of those shortly.

On another note: AMD GPUs not nose-diving versus RTX with ray tracing on is what I'd call interesting.
 
Had an ask around on the CB forums, hoping to get their reviewer, Wolfgang Andermahr, to say whether he has the CPU power usage figures too. Someone on the forum there said that while their CPU normally pulls 80-110 W in games, in Spider-Man it's more like 170-180 W. Another poster said someone had measured closer to 200 W on their 12900K. Or as they put it: double the power draw of the 5800X3D to gain 2% extra performance. Is the P4 Prescott back?
 
So I got to thinking it would be nice to get an idea of what threads are running with and without RT; I don't think any reviewers do those tests.

However, CB's CPU tests don't cover this:
Nor do PurePC's:
Nor PCGH's:

The only place which at least looks at core utilization is that Russian site:
But then they forget to mention what settings they are running in their core test 🙁
 
[image: VRAM consumption chart]

That VRAM consumption chart is weird. At 4K the 8GB cards go way over their limit; the 6600 XT is bogged down with almost 13GB, using system RAM to compensate, and that slows it down. The 6900 XT has it easy at around 10GB, while the 6700 XT goes up to 11GB. It's like the slower the card, the more VRAM gets used.
 
Take a look at this article:


Long story short: the whole shader pipeline is mostly monolithic and there might be many combinations to compile.

And drivers have to compile each pipeline mostly separately.

So another console port which has not been optimized on Windows properly - what a surprise.
It's the same with Stray. Basically every console port these days.
 