> A kind reminder: you made the original claim with zero evidence to back it up. If you disagree with my source, bring better data.

I absolutely don't disagree with your source. It may be OK (besides the weird 8400 figures...).
> I suggest you check the numbers again. The 7900X is a 10c/20t CPU, and at 720p it loses by 5% to the 8350K, which is a 4c/4t part @ 4GHz, and by 10% to the 8600K, which is a 6c/6t CPU @ 4.1-4.2GHz in MT loads.

Games don't use many cores, so it's not surprising that a 10-core CPU loses to another Skylake with 4 cores but clocked 10% higher...
> Even overclocked to 4.5GHz it loses to the 8400 @ 3.9GHz. It doesn't get any more lackluster than that.

I already said the 8400 is an outlier in this comparison.
> Yes, it is more than 10% slower. 😉

Maybe it is, maybe it isn't. I'm still looking forward to seeing your source. 🙂
Nope, the "bad for gaming" part is all in your head, which kinda funny considering the situation 🙂But you claimed that 7900X is bad for gaming because of mesh and you attached a review that doesn't even show that it's bad for gaming.
> Skylake-X uses a mesh interconnect and I don't think it really made it lag behind similarly clocked ring bus chips.
So you see, not "bad for gaming", just mediocre in gaming.Skylake-X performance in games is notoriously lackluster.
Ok, ELI5 time.you attached a review that doesn't even show that it's bad for gaming.
So you see, not "bad for gaming", just mediocre in gaming.
Ok, ELI5 time.
- In that review the 7900X was tested at stock clocks and also at 4.5Ghz.
- In the same review the 8400 ran at 3.8-3.9Ghz and the 8600K rand at 4.1-4.2Ghz. Both of these CPU performed better in games than the overclocked 7900X.
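A quick back-of-envelope on those clock numbers (a sketch using the figures quoted in this thread; exact review scores may differ):

```python
# Back-of-envelope behind the "4.5GHz vs 4.2GHz" point above.
# The inputs are the figures quoted in this thread, not fresh measurements.

sklx_clock = 4.5         # 7900X overclocked (GHz)
ring_clock = 4.2         # 8600K under multi-threaded load (GHz)
observed_deficit = 0.10  # 7900X reported ~10% slower at 720p

# If games scaled purely with frequency, the 7900X should LEAD by:
expected_lead = sklx_clock / ring_clock - 1
print(f"expected lead from clocks alone: {expected_lead:+.1%}")  # ~ +7.1%

# Instead it trails by ~10%, so the implied per-clock deficit in these games:
per_clock_gap = (1 - observed_deficit) / (sklx_clock / ring_clock) - 1
print(f"implied per-clock deficit: {per_clock_gap:+.1%}")        # ~ -16.0%
```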
> Nope, the "bad for gaming" part is all in your head, which is kinda funny considering the situation 🙂

Just to recap...

> Ok, ELI5 time.
> - In that review the 7900X was tested at stock clocks and also at 4.5GHz.
> - In the same review the 8400 ran at 3.8-3.9GHz and the 8600K ran at 4.1-4.2GHz. Both of these CPUs performed better in games than the overclocked 7900X.
> So you see, not "bad for gaming", just mediocre in gaming.

Now, here's the truth, and I'm sorry if it's gonna hurt a little: 4.5GHz > 4.2GHz. This means the 7900X lagged behind lower clocked ring bus chips. Some may say it was the mesh interconnect, some may say it was the tooth fairy.
(some also clocked the mesh higher and saw improved gaming experience, but that's a story for another thread)
The 8700K in that review also ran at around 4.3-4.5GHz, since the all-core boost is 4.3GHz and the 3-thread boost is 4.5GHz.
> Skylake-X performance in games is notoriously lackluster.

I would actually disagree with that. Even at stock clocks it's debatable, depending on what type of games you're playing. But when overclocked, Skylake-X can be beastly. Games with well-threaded engines perform very well on Skylake-X compared to games which still use only about 4 threads or so. And if you overclock the mesh, the memory and the CPU, it can even outperform Coffee Lake on a clock-for-clock basis, even when the latter is also overclocked. Here is Gamersnexus' review of the Intel 10980XE at 4.9GHz. It's a much more recent review than the one you cited, and it uses some fairly new titles, with a few exceptions. The worst performer for the 10980XE out of the entire selection was Total Warhammer 2, but that is because the game uses no more than 4 threads.
Gaming benchmarks start at 17:44:
Also, here's another review of the 10980XE, from PCGH.de, but at stock clocks. As I said before, the game and its multithreading capabilities have a huge impact on Skylake-X's performance. BF5, which uses the latest iteration of the Frostbite 3 engine and is well threaded and CPU intensive, actually performs better on the 10980XE and even the Threadripper 3xxx CPUs than on their mainstream counterparts.
This progression towards low-level APIs and higher levels of parallelism in game engines will ensure that a consumer who decides to buy a high core count CPU will not be punished with lackluster gaming performance.
So a $3,000 CPU does about the same as a $435 CPU (3900X) in games???
Come on, get real.
> Well, for $2000 I can get a 3970x and blow the doors off that 10980xe in gaming and everything else !

So we're talking about price now? I thought we were only discussing performance. Of course no one in their right mind is going to buy a 10980XE for gaming. It's a productivity CPU that can potentially be very good at gaming if you tweak it, especially for modern games that are more parallel. That's all I'm saying.
The 10980XE is $1000, and no, Threadripper 3000 doesn't "blow the doors off" it in gaming after you've properly tweaked the CPU.
Again, come on, get real here...
> The 10980XE is $1000, and no, Threadripper 3000 doesn't "blow the doors off" it in gaming after you've properly tweaked the CPU.

Where? Well, I used Newegg, couldn't find it on Amazon, then found it for $1200-something, somewhere...
> Where? Well, I used Newegg, couldn't find it on Amazon, then found it for $1200-something, somewhere...

Buy it and you can sell it for double the next week to some lunatic. No real retailer actually sells or ships the thing.
So I guess I will let someone else reply who may know better than me. But I still bet it loses on perf/$.

> But I still bet it loses on perf/$.

Perf/$ is not really relevant when you're talking $1000+ chips, because performance doesn't scale with price.
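That non-scaling is easy to see with made-up but representative numbers (a sketch; the prices and scores below are placeholders, not benchmark data):

```python
# Why perf/$ always punishes flagship parts: price roughly doubles at each
# tier while performance gains flatten out. Placeholder numbers, not data.

chips = {
    # name:     (price_usd, relative_perf)
    "midrange": (300, 1.00),
    "high-end": (500, 1.25),
    "flagship": (1000, 1.45),
}

for name, (price, perf) in chips.items():
    print(f"{name:8s}: {perf / price * 1000:.2f} perf per $1000")
# midrange: 3.33, high-end: 2.50, flagship: 1.45 - the fastest chip is
# the worst value by construction, which is why perf/$ says little here.
```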
> I would actually disagree with that. Even at stock clocks it's debatable, depending on what type of games you're playing. But when overclocked, Skylake-X can be beastly. Games with well-threaded engines perform very well on Skylake-X compared to games which still use only about 4 threads or so.

When overclocked and running modern, optimized games it does perform well, but we need to keep in mind the following:
- it has obvious weak spots in a number of other games, even against much lower clocked CPUs from the Skylake army
- when overclocked, we need to compare against overclocked Skylake, in which case it won't even have clock parity anymore

I cannot stress this enough: my intervention on the first page was aimed squarely at the claim that the mesh interconnect was no weaker than the ring bus in games. All one needs to do to disprove that claim is to find a meaningful category of games where the mesh fails to deliver. Everything else is just further discussion on the topic (which may actually be very interesting, as long as we keep it somewhat related to the thread topic).
When SKL-X first appeared on the review radar, people on this forum speculated that the mesh was at its first iteration, and that further optimization based on clock/cache increases would help alleviate problems with consumer workloads (latency goes down, cache misses go down). Unfortunately the 10nm drought followed, and we have yet to see the next generation of Intel server CPUs that will power the next HEDT generation as well.
There is a good argument to be made here: as more games adapt to many-core CPUs, relative performance on high throughput chips will increase despite their architectural "weakness". We sacrifice latency for core count, so there has to be a performance threshold somewhere. The way SKL-X and even Zen 1/2 chips perform in games today may not accurately reflect future game performance, in the sense that they are likely to age better than we expect them to.
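Amdahl's law is the cleanest way to put numbers on that aging argument; a quick sketch with illustrative parallel fractions (the p values are assumptions, not measurements from any engine):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the per-frame work and n is the number of cores used.

def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.80, 0.95):  # "4-thread era" engine -> well-threaded engine
    print(f"p={p:.2f}: 6 cores -> {speedup(p, 6):.2f}x, "
          f"18 cores -> {speedup(p, 18):.2f}x")

# At p=0.50 the extra 12 cores add ~10% (1.71x -> 1.89x);
# at p=0.95 they roughly double throughput (4.80x -> 9.73x).
# That's the whole "high core count chips will age better" bet in one formula.
```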
> People are also quick to blame the mesh and ignore the cache structure. Sure, the L2 got bumped up nicely to 1MB/core, but the L3 went down and, more importantly, is non-inclusive. Games (and other software) seem to love inclusive caches. My guess is because there is less cache snooping.

Aren't the mesh and the new cache structure semi-dependent on each other, to accomplish the goal of more uniform access time across a many-core monolithic die? (not a rhetorical question; [later edit] it was my understanding that the new cache structure was dictated by the new mesh arrangement, except maybe for the much larger L2)
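For what it's worth, the raw capacity math doesn't favor the old layout; here's a rough comparison using the published per-core cache sizes (the snoop-filter note in the comments is the usual explanation, not something measured here):

```python
# Effective per-core cache capacity, inclusive vs non-inclusive L3,
# using the published per-core figures for Skylake client vs Skylake-X.

# Skylake client: 256 KB L2 + 2 MB L3 slice, L3 inclusive of L2
# (L2 contents are duplicated in L3, so unique capacity ~= the L3 slice).
skl_unique_kb = 2048

# Skylake-X: 1 MB L2 + 1.375 MB L3 slice, L3 non-inclusive
# (a line lives in L2 or L3, so the capacities add).
sklx_unique_kb = 1024 + 1408  # = 2432 KB

print(f"Skylake ring: ~{skl_unique_kb} KB unique per core")
print(f"Skylake-X:    ~{sklx_unique_kb} KB unique per core")

# Unique capacity actually went UP on Skylake-X. What was lost is the
# inclusive L3's role as a snoop filter: if a line isn't in an inclusive
# L3, no core's L2 has it, so cross-core misses are cheap to rule out.
```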

> The best info we can still find is probably memory latency:
> [memory latency chart]

Huh, how come the Threadripper 2970WX/2990WX do so well there, compared to all Zen chips, but especially the 2950X, which fares nearly 50% worse? Did I miss something?
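If anyone wants to reproduce that kind of chart at home, a random pointer chase is the standard trick; here's a minimal sketch (Python's interpreter overhead blurs the absolute numbers, so treat the output as relative between sizes only):

```python
import random
import time

# Minimal pointer-chase latency probe: walk a random permutation so the
# hardware prefetcher can't help; larger working sets spill from
# L2 -> L3 -> DRAM and the ns-per-hop figure climbs accordingly.

def chase(n_elems: int, steps: int = 1_000_000) -> float:
    perm = list(range(n_elems))
    random.shuffle(perm)
    next_idx = [0] * n_elems
    # Link the shuffled elements into a single cycle.
    for a, b in zip(perm, perm[1:] + perm[:1]):
        next_idx[a] = b
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = next_idx[i]
    return (time.perf_counter() - t0) / steps * 1e9  # ns per hop

# Nominal sizes assume ~8 bytes/element; CPython object overhead inflates
# the real footprint, which is another reason to read these as relative.
for kb in (64, 1024, 16384, 131072):  # ~L2, ~L3 slice, ~full L3, DRAM
    print(f"{kb:>7} KB: {chase(kb * 1024 // 8):6.1f} ns/hop")
```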
> I love how people just make assertions like "So it has to be the memory subsystem!" - there are a million things going on inside these CPUs, and unless you have all of the performance characterization tools running (as in, watching the code as it executes on the processor) you have no idea where a bottleneck might be. There's too much nonsense going around about "latency" without any kind of performance characterization backing those assertions up.
>
> Plus, Zen OCing is sketchy at best. Remember the issues when Zen 2 first dropped, where people were doing massive undervolts and keeping the same clocks, but performance was dropping? The CPU was internally clock gating/stretching, so performance was going down even though the apparent clock stayed the same. How do we know this 5 GHz OC isn't clock stretching either?

It might be stretching a bit, but it can't be a lot, as the productivity benches improved a lot; only gaming didn't.
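Clock stretching is at least checkable: the throughput of a fixed busy-loop tracks the effective clock even when the reported multiplier doesn't move. A rough sketch (HWiNFO's "effective clock" and Linux turbostat's Bzy_MHz, both based on the APERF/MPERF counters, are the more direct way to check):

```python
import time

# Crude clock-stretch probe: run a fixed amount of work and measure
# throughput. Re-run after applying the overclock; if the reported clock
# rose 10% but this number didn't, the extra "frequency" is being
# stretched away internally. Single-threaded, so pin it to one core
# and close background apps for a fair before/after comparison.

def work_rate(iters: int = 50_000_000) -> float:
    x = 0
    t0 = time.perf_counter()
    for i in range(iters):
        x += i
    return iters / (time.perf_counter() - t0)  # iterations per second

print(f"baseline: {work_rate() / 1e6:.1f} M iters/s")
```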
> Huh, how come the Threadripper 2970WX/2990WX do so well there, compared to all Zen chips, but especially the 2950X, which fares nearly 50% worse? Did I miss something?

Must be running game/creator mode in different setups.
> I love how people just make assertions like "So it has to be the memory subsystem!" - there are a million things going on inside these CPUs, and unless you have all of the performance characterization tools running you have no idea where a bottleneck might be.

From Anandtech running SPEC2017 on Renoir (https://www.anandtech.com/show/1576...4-review-swift-gets-swifter-with-ryzen-4000/3):

> Renoir showcases the biggest increases in workloads such as 548.exchange2_r and 525.x264_r which are back-end execution bound workloads, and the microarchitectural improvements here help a lot.

and

> On the other hand, the weakest improvements are seen in workloads such as 520.omnetpp_r – this test is mostly memory latency bound and unfortunately the new chip here barely just matches its predecessor. The same can be said about 505.mcf_r where the improvements are quite meager.

> In SPECfp2017, these are floating point heavier test workloads. The generational increases here are also relatively smaller, with even an odd regression in 527.cam4_r. The Intel chip still has a lead across the board, with particularly large gaps in the more memory heavy workloads such as 519.lbm_r and 549.fotonik3d_r.

It is clear that the memory subsystem is a problem, since SPEC2017 workloads are too big to fit in L3.
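That "too big for L3" effect is easy to feel locally: stream over a working set that fits in cache vs one that doesn't (a rough sketch; the sizes are illustrative and Python softens the gap considerably):

```python
import array
import time

# Stream-sum a compact array of 8-byte doubles: one that plausibly fits
# in a big L3 vs one that is clearly DRAM-bound. Expect a visible (if
# muted, thanks to interpreter overhead) drop in MB/s on the large size.

def sum_rate(mb: int, passes: int = 5) -> float:
    a = array.array("d", range(mb * 1024 * 1024 // 8))
    acc = 0.0
    t0 = time.perf_counter()
    for _ in range(passes):
        acc += sum(a)
    return mb * passes / (time.perf_counter() - t0)  # MB summed per second

print(f"  8 MB (cache-resident): {sum_rate(8):8.1f} MB/s")
print(f"256 MB (DRAM-bound):     {sum_rate(256):8.1f} MB/s")
```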