Why are people linking GPU-bound results in an argument about CPU gaming performance?
> i2Hard tests with RT enabled. Unless you enable RT, CP2077 is mostly GPU-bound.

I linked one of each: a 720p test, which is probably not GPU-bound, and another showing a different GPU vendor with similar results.
> Don't mind him, he is grasping at straws because Alder Lake is no longer the Gaming King. We all knew that, AMD knew that... we just needed to know how much faster it was.

So when TPU says something you don't like, the other source is suddenly more accurate?
i2Hard tests with RT enabled. Unless you enable RT, CP2077 is mostly GPU-bound.
> OK, if you don't like CP2077, look at Troy: A Total War Saga. Most people in the benchmarking business have no idea which settings and scenes are CPU- or GPU-bound because they don't play the games; they're just making content.

So they are the outlier then, since no one else uses RT for CPU tests. It would be good if other outlets did, to corroborate the results, but without that an outlier result is an outlier result.
OK, if you don't like CP2077, look at Troy: A Total War Saga. Most people in the benchmarking business have no idea which settings and scenes are CPU- or GPU-bound because they don't play the games; they're just making content.
> I suspect we will see at least one product with 170W, more thoughts on this below.

I suspect part of it will be used by the new parts right out of the gate. Intel pushing their chips beyond 200 W lets them benchmark a bit better. AMD doesn't need to go nearly that far, but an extra 50 W will let them stretch the bars a bit more.
It also gives some extra room for all-core boost on the two-chiplet parts. Those were constrained a fair bit: when 16 cores all want a share of that TDP and there's an IO die to consider as well, 125 W doesn't go as far as one would think.
There are also rumors that all of the Zen 4 CPUs will have a small amount of integrated graphics on the IO die this time around. That will add to the power draw when it's in use, and they probably want extra room for it.
Eventually they will release Zen 4 3D V-Cache parts, and the V-Cache is going to need to be powered too. With the 5800X3D they lowered the boost clocks in part because the TDP needed to stay the same.
So I don't think there's just one reason for the 170 W TDP; there are a lot of little ones. We may not even see the initial Zen 4 CPUs use all of that 170 W, but it will be there in case they need it later, and it ensures that boards will support future parts that actually do draw the full amount.
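To put rough numbers on that, here's a back-of-the-envelope sketch. The 15 W IO-die figure is my own assumption, and real Ryzen parts boost against PPT (which sits above TDP), so treat these as illustrative only:

```python
# Back-of-the-envelope per-core power budget on a 16-core, two-chiplet part.
# The 15 W IO-die draw is an assumed figure, not a published spec, and real
# parts boost against PPT (> TDP); the point is only the relative headroom.
IO_DIE_WATTS = 15
CORES = 16

for tdp in (125, 170):
    core_budget = tdp - IO_DIE_WATTS
    print(f"{tdp:>3} W TDP -> {core_budget} W for the cores, "
          f"~{core_budget / CORES:.1f} W per core all-core")
```

Going from ~6.9 W to ~9.7 W per core is about a 40% bump in all-core headroom, which is why the extra TDP matters most for the two-chiplet parts.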
I find 170 W regrettable, but if your competitor is doing it, you have to play that game. Really, AMD shouldn't follow, but they will, because halo parts trade on irrational consumer behavior.
But it'll be really neat to see whether the 5800X3D is as competitive as AMD claims without using more power. On a 4-year-old process. On an old platform. With old memory.
Intel will do whatever is required to get to number one in DIY/OEM. 400 W? 500 W? The 12900KS already uses 500 W+.
> If those results are true, the 5800X3D is a killer gaming CPU at half the price of the 12900K. If DDR5 is used, the perf/$ gets even worse for Alder Lake.

Tuned DDR5 worth its salt is very expensive, motherboards are very expensive, and the 12900KS is very expensive. Can someone price a motherboard + DDR5 + 12900KS + 3090 Ti build against a decent AM4 board + DDR4 + 5800X3D + 3080 Ti and see what the actual gaming performance/price difference is?
> Tuned DDR5 worth its salt is very expensive, motherboards are very expensive, and the 12900KS is very expensive. Can someone price a motherboard + DDR5 + 12900KS + 3090 Ti build against a decent AM4 board + DDR4 + 5800X3D + 3080 Ti and see what the actual gaming performance/price difference is?

That's easily a $500 difference, even if you ignore the GPU.
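To make that concrete, here's a minimal sketch of the comparison. Every price is a placeholder I picked for illustration, not a quoted street price; swap in current numbers:

```python
# Platform cost comparison, GPU excluded. All prices are hypothetical
# placeholders; substitute real street prices before drawing conclusions.
intel = {"i9-12900KS": 740, "Z690 DDR5 board": 300, "32 GB DDR5-6000": 330}
amd   = {"5800X3D": 450, "B550 board": 150, "32 GB DDR4-3600": 130}

intel_total, amd_total = sum(intel.values()), sum(amd.values())
print(f"Intel platform: ${intel_total}")              # $1370
print(f"AMD platform:   ${amd_total}")                # $730
print(f"Difference:     ${intel_total - amd_total}")  # $640
```

With numbers anywhere in that ballpark, the platform gap alone pays for a full GPU tier.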
> i2Hard tests with RT enabled. Unless you enable RT, CP2077 is mostly GPU-bound.

Maxed, tweaked regular Zen 3.
Please, let's not ignore the GPU.
If buying a 5800X3D would allow you to buy a 3080 Ti or 6900XT over a 3090 Ti, that is a game-changer right there.
I urge you to take a look at the video. TPU looks amateurish compared to them.
> Why not both? I mean, if buying a 5800X3D would let you game the same with a 6900XT as you would with a 3090 Ti paired with a 12900K, then why not?

I believe you have those backwards: a 5800X3D + DDR4 lets you buy a faster GPU for the same total price versus a 12900KS + DDR5.
Why not both? I mean, if buying a 5800X3D would let you game the same with a 6900XT as you would with a 3090 Ti paired with a 12900K, then why not?
If those results are true, the 5800X3D is a killer gaming CPU at half the price of the 12900K. If DDR5 is used, the perf/$ gets even worse for Alder Lake.
> It's basically a tie with the 12900K. The biggest gain is in Borderlands 3, and there is a regression in CS:GO (as expected).
> Well, it's a good CPU, but not something that can claim to be the best gaming CPU.

Are you grasping at straws again? The 5800X3D will be hailed as the new Gaming King...! Until the next new CPU is released. It's going to be a short reign.
It's basically a tie with the 12900K. The biggest gain is in Borderlands 3, and there is a regression in CS:GO (as expected).
Well, it's a good CPU, but not something that can claim to be the best gaming CPU.
> Especially when it must be pretty conservative on clocks.

The lower clocks are really hurting it. In future iterations, maybe they could put smaller V-Cache dies together with dark silicon in between to reduce the thermal impact and keep frequencies high.
> The lower clocks are really hurting it. In future iterations, maybe they could put smaller V-Cache dies together with dark silicon in between to reduce the thermal impact and keep frequencies high.

I've got a feeling that the next iteration of V-Cache will allow overclocking, so the lower clocks are just teething pains from implementing V-Cache for the first time. Once they get more data and experience with it, this should sort itself out; smaller V-Cache dies won't be necessary.
> Is there something weird going on with TPU's gaming numbers, where at 1440p and 4K the older 10900K and the 12400 are often at the top of the chart? GPU bottleneck?

Possibly lower inter-core latency: the E-cores slow down inter-core communication on the 12900K/KS.
> Is there something weird going on with TPU's gaming numbers, where at 1440p and 4K the older 10900K and the 12400 are often at the top of the chart? GPU bottleneck?

You're looking at a 4K chart and asking if it isn't a bottleneck?
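That's the usual mental model: delivered FPS is roughly the minimum of what the CPU and the GPU can each sustain, so once the GPU ceiling drops below every CPU's limit, chart order is noise. A toy illustration, with every figure invented:

```python
# Toy bottleneck model: frame rate ~= min(CPU-limited fps, GPU-limited fps).
# All fps numbers below are invented purely to show the shape of the effect.
gpu_ceiling = {"720p": 400, "1440p": 140, "4K": 75}
cpu_limit = {"12400": 180, "10900K": 190, "12900KS": 240, "5800X3D": 250}

for res, gpu_fps in gpu_ceiling.items():
    print(res, {cpu: min(fps, gpu_fps) for cpu, fps in cpu_limit.items()})
# At 1440p and 4K every CPU lands on the same GPU ceiling, so a 10900K or a
# 12400 "beating" a 12900KS there is margin-of-error ordering, not signal.
```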