Discussion [Video]Ryzen 7 3800X 5GHz vs. Core i9 9900K 5 GHz


tamz_msc

Diamond Member
Jan 5, 2017
3,439
3,358
136

Skylake wins every gaming test except the WoW 1% and 0.1% lows. This suggests the bottleneck in Zen 2 isn't frequency but the memory subsystem. Hopefully Zen 3 addresses this; otherwise Intel would have nothing to worry about.
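For context on how those metrics are derived: 1% and 0.1% lows are typically the average FPS of the slowest 1% / 0.1% of frames in a capture (equivalently, the inverted 99th/99.9th percentile frame times). A minimal sketch, with made-up frame-time data:

```python
def percentile_low(frame_times_ms, pct):
    """Average FPS over the worst `pct` fraction of frames.

    frame_times_ms: per-frame render times in milliseconds.
    """
    worst = sorted(frame_times_ms, reverse=True)  # slowest frames first
    n = max(1, int(len(worst) * pct))
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms  # convert average frame time to FPS

# Hypothetical capture: mostly smooth 7 ms frames plus a few 20 ms stutters
frames = [7.0] * 997 + [20.0] * 3
print(round(percentile_low(frames, 0.01), 1))   # 1% low
print(round(percentile_low(frames, 0.001), 1))  # 0.1% low
```

Because the lows are dominated by the handful of slowest frames, they react to memory-subsystem hiccups long before the average FPS does, which is one reason people read them as a memory signal rather than a frequency one.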
 

piokos

Senior member
Nov 2, 2018
554
206
86
A kind reminder you made the original claim with zero evidence to back it up. If you disagree with my source, bring better data.
I absolutely don't disagree with your source. It may be OK (besides the weird 8400 figures...).
But you claimed that 7900X is bad for gaming because of mesh and you attached a review that doesn't even show that it's bad for gaming. :)

How do you expect me to "bring better data"? I'm not trying to prove anything.
I can post the same link you did.

I'm not even saying you're wrong. I merely point out that you failed to support that claim with data. :)
I suggest you check the numbers again. The 7900X is a 10c/20t CPU, and at 720p it loses by 5% to the 8350K, a 4c/4t chip @ 4 GHz, and by 10% to the 8600K, a 6c/6t chip @ 4.1-4.2 GHz in MT loads.
Games don't use many cores, so it's not surprising that a 10-core CPU loses to another Skylake - with 4 cores but clocked 10% higher...
Even overclocked to 4.5 GHz it loses to the 8400 @ 3.9 GHz. It doesn't get any more lackluster than that.
I already said 8400 is an outlier in this comparison.
Look at 8400 vs 8600K.
Yes, it is more than 10% slower. ;)
Maybe it is, maybe it isn't. I'm still looking forward to seeing your source. :)
 

coercitiv

Diamond Member
Jan 24, 2014
5,683
9,904
136
But you claimed that 7900X is bad for gaming because of mesh and you attached a review that doesn't even show that it's bad for gaming.
Nope, the "bad for gaming" part is all in your head, which is kinda funny considering the situation :)

Quick recap:
Skylake-X uses a mesh interconnect and I don't think it really made it lag behind similarly clocked ring bus chips.
Skylake-X performance in games is notoriously lackluster.
So you see, not "bad for gaming", just mediocre in gaming.

you attached a review that doesn't even show that it's bad for gaming.
Ok, ELI5 time.
  • In that review the 7900X was tested at stock clocks and also at 4.5 GHz.
  • In the same review the 8400 ran at 3.8-3.9 GHz and the 8600K ran at 4.1-4.2 GHz. Both of these CPUs performed better in games than the overclocked 7900X.
Now, here's the truth, and I'm sorry if it's gonna hurt a little: 4.5 GHz > 4.2 GHz.

This means the 7900X lagged behind lower clocked ring bus chips. Some may say it was the mesh interconnect, some may say it was the tooth fairy.
(some also clocked the mesh higher and saw improved gaming experience, but that's a story for another thread)

the 8700K in that review also ran around 4.3-4.5 GHz, since the all-core boost is 4.3 GHz and the 3-thread boost is 4.5 GHz
 

Gideon

Golden Member
Nov 27, 2007
1,534
3,251
136
So you see, not "bad for gaming", just mediocre in gaming.

Ok, ELI5 time.
  • In that review the 7900X was tested at stock clocks and also at 4.5 GHz.
  • In the same review the 8400 ran at 3.8-3.9 GHz and the 8600K ran at 4.1-4.2 GHz. Both of these CPUs performed better in games than the overclocked 7900X.

To reinforce the point with a slightly newer Skylake HEDT chip: the 9980XE is a bit worse in gaming than the 3700X despite having a slightly higher boost clock. So everything said about the HEDT chips also applies to Ryzens, and then some.
 

amrnuke

Golden Member
Apr 24, 2019
1,175
1,767
106
Nope, the "bad for gaming" part is all in your head, which is kinda funny considering the situation :)

Quick recap:


So you see, not "bad for gaming", just mediocre in gaming.


Ok, ELI5 time.
  • In that review the 7900X was tested at stock clocks and also at 4.5 GHz.
  • In the same review the 8400 ran at 3.8-3.9 GHz and the 8600K ran at 4.1-4.2 GHz. Both of these CPUs performed better in games than the overclocked 7900X.
Now, here's the truth, and I'm sorry if it's gonna hurt a little: 4.5 GHz > 4.2 GHz.

This means the 7900X lagged behind lower clocked ring bus chips. Some may say it was the mesh interconnect, some may say it was the tooth fairy.
(some also clocked the mesh higher and saw improved gaming experience, but that's a story for another thread)

the 8700K in that review also ran around 4.3-4.5 GHz, since the all-core boost is 4.3 GHz and the 3-thread boost is 4.5 GHz
Just to recap...

6700K @ 4.0 GHz > 7900X @ 4.5 GHz
7700K @ 4.2 GHz > 7900X @ 4.5 GHz
8600K @ 3.6 GHz > 7900X @ 4.5 GHz

That's definitely lackluster.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Skylake-X performance in games is notoriously lackluster.

I would actually disagree with that. Even at stock clocks this is debatable, depending on what type of games you're playing. But when overclocked, Skylake-X can be beastly. Games with well-threaded engines will perform very well on Skylake-X compared to games which still use only about 4 threads or so. And if you overclock the mesh, the memory, and the CPU, then it can even outperform Coffee Lake on a clock-for-clock basis, even when the latter is also overclocked. Here is GamersNexus' review of the Intel 10980XE at 4.9 GHz. It's a much more recent review than the one you cited, and it uses some fairly new titles, with a few exceptions. The worst performer for the 10980XE out of the entire selection was Total War: Warhammer 2, but that is because the game uses no more than 4 threads.

Gaming benchmarks start at 17:44:


Also, here's another review of the 10980XE from PCgh.de, but at stock clocks. As I said before, a game's multithreading capability has a huge impact on Skylake-X's performance. BF5, which uses the latest iteration of the Frostbite 3 engine and is well-threaded and CPU-intensive, actually performs better on the 10980XE, and even on the Threadripper 3000 CPUs, than on their mainstream counterparts.

This progression towards low level APIs and higher levels of parallelism in game engines will ensure that a consumer that decides to buy high core count CPUs will not be punished with lackluster gaming performance.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
24,609
13,716
136
I would actually disagree with that. Even at stock clocks this is debatable, depending on what type of games you're playing. But when overclocked, Skylake-X can be beastly. Games with well-threaded engines will perform very well on Skylake-X compared to games which still use only about 4 threads or so. And if you overclock the mesh, the memory, and the CPU, then it can even outperform Coffee Lake on a clock-for-clock basis, even when the latter is also overclocked. Here is GamersNexus' review of the Intel 10980XE at 4.9 GHz. It's a much more recent review than the one you cited, and it uses some fairly new titles, with a few exceptions. The worst performer for the 10980XE out of the entire selection was Total War: Warhammer 2, but that is because the game uses no more than 4 threads.

Gaming benchmarks start at 17:44:


Also, here's another review of the 10980XE from PCgh.de, but at stock clocks. As I said before, a game's multithreading capability has a huge impact on Skylake-X's performance. BF5, which uses the latest iteration of the Frostbite 3 engine and is well-threaded and CPU-intensive, actually performs better on the 10980XE, and even on the Threadripper 3000 CPUs, than on their mainstream counterparts.

This progression towards low level APIs and higher levels of parallelism in game engines will ensure that a consumer that decides to buy high core count CPUs will not be punished with lackluster gaming performance.
So a $3,000 CPU does about the same as a $435 CPU (3900X) in games???

Come on, get real.
 

Shmee

Memory and Storage, Graphics Cards
Super Moderator
Sep 13, 2008
6,657
1,866
136
I wonder if my 8-core Xeon 1660 v3 @ 4.3 GHz is better in games than Skylake-X. If so, that definitely shows a regression in performance per clock from X99 to X299.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
So a $3,000 CPU does about the same as a $435 cpu(3900x) in games ???

Come on, get real.

So we're talking about price now? I thought we were only discussing performance. Of course no one in their right mind is going to buy a 10980XE for gaming. It's a productivity CPU that can potentially be very good at gaming if you tweak it, especially for modern games that are more parallel. That's all I'm saying.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
24,609
13,716
136
So we're talking about price now? I thought we were only discussing performance. Of course no one in their right mind is going to buy a 10980XE for gaming. It's a productivity CPU that can potentially be very good at gaming if you tweak it, especially for modern games that are more parallel. That's all I'm saying.
Well, for $2000 I can get a 3970X and blow the doors off that 10980XE in gaming and everything else!

Again, come on, get real here.....
 

tamz_msc

Diamond Member
Jan 5, 2017
3,439
3,358
136
Well, for $2000 I can get a 3970X and blow the doors off that 10980XE in gaming and everything else!

Again, come on, get real here.....
The 10980XE is $1000, and no, Threadripper 3000 doesn't "blow the doors off" it in gaming once you've properly tweaked the CPU.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
24,609
13,716
136
The 10980XE is $1000, and no, Threadripper 3000 doesn't "blow the doors off" it in gaming once you've properly tweaked the CPU.
Well, I used Newegg, couldn't find it on Amazon, then found it for $1200-something, somewhere...

So I guess I'll let someone else who may know better than me reply. But I still bet it loses on perf/$.
 

coercitiv

Diamond Member
Jan 24, 2014
5,683
9,904
136
I would actually disagree with that. Even at stock clocks this is debatable depending on what type of games you're playing. But when overclocked, Skylake-X can be beastly. Games with well threaded engines will perform very well on Skylake-X compared to games which still use only about 4 threads or so.
When overclocked and running modern, optimized games it does perform well, but we need to keep in mind the following:
  • it has obvious weak spots in a number of other games, even against much lower-clocked CPUs from the Skylake army
  • when overclocked, we need to compare against overclocked Skylake, in which case it won't even have clock parity anymore
I cannot stress this enough: my intervention on the first page was aimed squarely at the claim that the mesh interconnect was no weaker than the ring bus in games. All one needs to do to disprove that claim is find a meaningful category of games where the mesh fails to deliver. Everything else is just further discussion on the topic. (which may actually be very interesting, as long as we keep it somewhat related to the thread topic)

When SKL-X first appeared on the review radar, people on this forum speculated that the mesh was at its first iteration, and that further optimization via clock/cache increases would help alleviate problems with consumer workloads (latency goes down, cache misses go down). Unfortunately the 10nm drought followed, and we have yet to see the next generation of Intel server CPUs that will power the next HEDT generation as well.

There is a good argument to be made here: as more games adapt to many-core CPUs, relative performance on high-throughput chips will increase despite their architectural "weakness". We sacrifice latency for core count, so there has to be a crossover point. The way SKL-X and even Zen 1/2 chips perform in games today may not accurately reflect future game performance, in the sense that they are likely to age better than we expect them to.

 

Gideon

Golden Member
Nov 27, 2007
1,534
3,251
136
When SKL-X first appeared on the review radar, people on this forum speculated that the mesh was at its first iteration, and that further optimization via clock/cache increases would help alleviate problems with consumer workloads (latency goes down, cache misses go down). Unfortunately the 10nm drought followed, and we have yet to see the next generation of Intel server CPUs that will power the next HEDT generation as well.

There is a good argument to be made here: as more games adapt to many-core CPUs, relative performance on high-throughput chips will increase despite their architectural "weakness". We sacrifice latency for core count, so there has to be a crossover point. The way SKL-X and even Zen 1/2 chips perform in games today may not accurately reflect future game performance, in the sense that they are likely to age better than we expect them to.

This raises an interesting topic. Does setting the mesh clock of Skylake-X as a multiple of the memory clock change the memory latency (e.g. is latency lower with a 1:1 or 1:2 ratio vs., say, 1:1.213...)? If not, then AMD could also theoretically decouple the Zen 3 FCLK in a way that is less sensitive to the memory speed ratio than Zen 2.

Here is an interesting HWUB video about mesh overclocking on the 7800X. Going to 3 GHz barely moved the needle on FPS:

It is, however, interesting that the Intel mesh can be overclocked to ~3.2 GHz with measurable latency improvements. I really hope AMD can also get at least above 2.1-2.2 GHz on Zen 3.
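To illustrate why a mesh overclock moves latency measurably but FPS only slightly, here's a toy two-term model (all numbers below are assumptions for illustration, not measurements): total memory latency is a DRAM portion that ignores the uncore clock, plus a fabric portion that costs a fixed number of mesh cycles.

```python
def total_latency_ns(dram_ns, mesh_cycles, mesh_ghz):
    """Toy model: load-to-use latency = fixed DRAM/controller portion
    plus a fabric traversal costing `mesh_cycles` mesh-clock cycles."""
    return dram_ns + mesh_cycles / mesh_ghz

# Assumed numbers, purely for illustration
DRAM_NS = 55.0      # DRAM + controller portion, independent of mesh clock
MESH_CYCLES = 60    # cycles spent crossing the mesh

for ghz in (2.4, 3.0, 3.2):
    print(f"mesh @ {ghz} GHz -> {total_latency_ns(DRAM_NS, MESH_CYCLES, ghz):.1f} ns")
```

Only the fabric term shrinks, so under these assumed numbers a ~33% uncore overclock trims total latency by only single-digit nanoseconds, which lines up with the "measurable latency, barely-moved FPS" pattern in the video.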
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
10980XE is $1000, and no Threadripper 3000 doesn't "blow the doors off" it in gaming after you've properly tweaked the CPU.
Where?
By that I mean show me a place where you can buy it or order it with an actual shipping date that is in this century.

It's not a real price cut when you practically discontinue the product.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Well, I used newegg, couldn;t find it on amazon, then found it for $1200 something, somewhere..

So I guess I will let someone else reply who may know better than me. But I still bet it looses perf/$
Buy it and you can sell it the next week for double to some lunatic. No real retailer actually sells or ships the thing.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,208
2,867
136
When overclocked and running modern, optimized games it does perform well, but we need to keep in mind the following:
  • it has obvious weak spots in a number of other games, even against much lower-clocked CPUs from the Skylake army
  • when overclocked, we need to compare against overclocked Skylake, in which case it won't even have clock parity anymore
I cannot stress this enough: my intervention on the first page was aimed squarely at the claim that the mesh interconnect was no weaker than the ring bus in games. All one needs to do to disprove that claim is find a meaningful category of games where the mesh fails to deliver. Everything else is just further discussion on the topic. (which may actually be very interesting, as long as we keep it somewhat related to the thread topic)

When SKL-X first appeared on the review radar, people on this forum speculated that the mesh was at its first iteration, and that further optimization via clock/cache increases would help alleviate problems with consumer workloads (latency goes down, cache misses go down). Unfortunately the 10nm drought followed, and we have yet to see the next generation of Intel server CPUs that will power the next HEDT generation as well.

There is a good argument to be made here: as more games adapt to many-core CPUs, relative performance on high-throughput chips will increase despite their architectural "weakness". We sacrifice latency for core count, so there has to be a crossover point. The way SKL-X and even Zen 1/2 chips perform in games today may not accurately reflect future game performance, in the sense that they are likely to age better than we expect them to.

People are also quick to blame the mesh and ignore the cache structure. Sure, the L2 got bumped up nicely to 1 MB/core, but the L3 went down and, more importantly, is non-inclusive. Games (and other software) seem to love inclusive caches. My guess is it's because there is less cache snooping.
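A toy model of the snooping point (hypothetical, just to illustrate the mechanism): with an inclusive L3, a miss in the L3 guarantees no private L2 holds the line, so the lookup can stop there; with a non-inclusive L3, an L3 miss may still require probing other cores' L2s. (Skylake-X actually adds a snoop filter to avoid the worst case, which is not modeled here.)

```python
def probes_needed(line, l3, private_l2s, inclusive):
    """Count cache structures probed during a coherence lookup.

    l3: set of lines in the shared L3.
    private_l2s: list of sets, one per core's private L2.
    inclusive: if True, the L3 is a superset of every private L2.
    """
    checks = 1                  # the shared L3 is always checked first
    if line in l3:
        return checks           # L3 hit: done either way
    if inclusive:
        return checks           # inclusive L3 miss: no L2 can hold the line
    for l2 in private_l2s:      # non-inclusive: probe private L2s too
        checks += 1
        if line in l2:
            break
    return checks

l2s = [{"A"}, {"B"}, {"C"}, set()]
print(probes_needed("B", l3=set(), private_l2s=l2s, inclusive=False))  # 3
print(probes_needed("Z", l3=set(), private_l2s=l2s, inclusive=True))   # 1
```

So non-inclusion trades L3 capacity efficiency for potentially chattier misses, which fits the "games love inclusive caches" observation.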
 

coercitiv

Diamond Member
Jan 24, 2014
5,683
9,904
136
People are also quick to blame the mesh and ignore the cache structure. Sure, the L2 got bumped up nicely to 1MB/core, but L3 went down and more importantly is non-inclusive. Games (and other software) seem to love all inclusive caches. My guess is because there is less cache snooping.
Aren't the mesh and the new cache structure semi-dependent on each other in accomplishing the goal of more uniform access times across a many-core monolithic die? (not a rhetorical question; [later edit] it was my understanding that the new cache structure was dictated by the new mesh arrangement, except maybe for the much larger L2)

A quick recap: in terms of intercore latency SKL-X looked better than Zen1 at launch, but not better than ring bus Broadwell-E, Haswell-E and especially consumer Kaby Lake.

[Image: inter-core latency ping times (186a-latency-pingtimes.png)]

We're also lacking more recent data. Back when SKL-X launched there was some talk around here that UEFI updates helped improve the situation, to the point where it became significantly better in (gaming) benchmarks, possibly shifting the choke point towards the L3 cache. The HEDT platform seems kinda forgotten by both Intel and reviewers, so I guess we'll have to wait a lot longer before getting decent answers. (I have seen no Cascade Lake-X latency measurements, for example)

The best info we can still find is probably memory latency:
[Image: AIDA64 memory latency comparison (Memory-AIDA-Latency.jpg)]

And Broadwell-E with similar memory, to give more insight:
[Image: Broadwell-E memory latency (mem3200.png)]
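For the curious, the kind of ping time in that first chart is measured with a "ping-pong" between two threads pinned to different cores. Here's a rough Python sketch of the structure only; Python adds scheduler/GIL overhead, so the number it prints is illustrative, and real tools do this with pinned threads spinning on atomics in C:

```python
import threading
import time

def ping_pong(iters=2000):
    """Time round-trips of a handoff between two threads; half the
    round-trip approximates one thread-to-thread 'ping'."""
    ping, pong = threading.Event(), threading.Event()

    def responder():
        for _ in range(iters):
            ping.wait(); ping.clear()  # wait for the ping...
            pong.set()                 # ...answer with a pong

    t = threading.Thread(target=responder)
    t.start()
    start = time.perf_counter()
    for _ in range(iters):
        ping.set()
        pong.wait(); pong.clear()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / (2 * iters)  # seconds per one-way handoff

print(f"~{ping_pong() * 1e6:.1f} us per handoff (interpreter overhead included)")
```

The chart's numbers are much lower than what this prints because the real measurement removes the OS/runtime from the loop; only the ping-pong pattern carries over.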
 

moinmoin

Diamond Member
Jun 1, 2017
4,432
6,777
136
The best info we can still find is probably memory latency:
[Image: AIDA64 memory latency (memory-aida-latency.jpg)]
Huh, how come Threadripper 2970WX/2990WX do so well there compared to all the other Zen chips, especially the 2950X, which fares nearly 50% worse? Did I miss something?
 
Apr 30, 2020
68
170
76
I love how people just make assertions like "So it has to be the memory subsystem!" - there are a million things going on inside these CPUs, and unless you have all the performance-characterization tools running (as in, watching the code as it executes on the processor), you have no idea where a bottleneck might be. There's too much nonsense going around about "latency" without any kind of performance characterization backing those assertions up.

Plus, Zen OCing is sketchy at best. Remember the issues when Zen 2 first dropped, where people were doing massive undervolts and keeping the same clocks, but performance was dropping? The CPU was internally clock gating/stretching, so the performance was going down even though the apparent clock stayed the same. How do we know this 5 GHz OC isn't clock stretching either?
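One way to sanity-check a claimed overclock against stretching is to compare score scaling with clock scaling on a clock-bound benchmark: if the reported clock went up ~14% but the score only went up ~4%, the effective clock didn't follow. All the numbers below are hypothetical, just to show the arithmetic:

```python
def effective_clock_ghz(score, baseline_score, baseline_ghz):
    """Infer effective clock from a (roughly) clock-proportional
    benchmark score, relative to a known-good baseline run."""
    return baseline_ghz * (score / baseline_score)

# Hypothetical single-thread runs of a clock-bound benchmark
baseline_score, baseline_ghz = 520.0, 4.4   # known-good stock-ish run
oc_score, oc_reported_ghz = 540.0, 5.0      # claimed "5 GHz" overclock

eff = effective_clock_ghz(oc_score, baseline_score, baseline_ghz)
print(f"reported {oc_reported_ghz} GHz, effective ~{eff:.2f} GHz")
if eff < 0.98 * oc_reported_ghz:
    print("score didn't scale with clock -> possible clock stretching")
```

It only works on workloads that genuinely scale with core clock (cache-resident, single-threaded), which is exactly why stretching slipped past people watching the reported frequency alone.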
 

Gideon

Golden Member
Nov 27, 2007
1,534
3,251
136
I love how people just make assertions like "So it has to be the memory subsystem!" - there are a million things going on inside these CPUs, and unless you have all the performance-characterization tools running (as in, watching the code as it executes on the processor), you have no idea where a bottleneck might be. There's too much nonsense going around about "latency" without any kind of performance characterization backing those assertions up.

Plus, Zen OCing is sketchy at best. Remember the issues when Zen 2 first dropped, where people were doing massive undervolts and keeping the same clocks, but performance was dropping? The CPU was internally clock gating/stretching, so the performance was going down even though the apparent clock stayed the same. How do we know this 5 GHz OC isn't clock stretching either?
It might be stretching a bit, but it can't be by much, as the productivity benches improved a lot; only gaming didn't.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,439
3,358
136
I love how people just make assertions like "So it has to be the memory subsystem!" - there are million things going on inside these CPUs, and unless you have all of the performance characterizing tools running (as in, watching the code as it executes on the processor) you have no idea where a bottleneck might be. There's too much nonsense going around about "latency" without any kind of performance characterizations backing those assertions up.

Plus, Zen OCing is sketchy at best. Remember the issues when Zen 2 first dropped, where people were doing massive undervolts and keeping the same clocks, but performance was dropping? The CPU was internally clock gating/stretching, so the performance was going down even though the apparent clock stayed the same. How do we know this 5 GHz OC isn't clock stretching either?
From AnandTech running SPEC2017 on Renoir (https://www.anandtech.com/show/1576...4-review-swift-gets-swifter-with-ryzen-4000/3):
Renoir showcases the biggest increases in workloads such as 548.exchange2_r and 525.x264_r which are back-end execution bound workloads, and the microarchitectural improvements here help a lot.


On the other hand, the weakest improvements are seen in workloads such as 520.omnetpp_r – this test is mostly memory latency bound and unfortunately the new chip here barely just matches its predecessor. The same can be said about 505.mcf_r where the improvements are quite meager.
and
In SPECfp2017, these are floating point heavier test workloads. The generational increases here are also relatively smaller, with even an odd regression in 527.cam4_r. The Intel chip still has a lead across the board, and with particular large gaps in the more memory heavy workloads such as 519.lbm_r and 549.fotonik3d_r.
It is clear that the memory subsystem is a problem since SPEC2017 workloads are too big to fit in L3.
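The latency-bound workloads AnandTech calls out (520.omnetpp_r, 505.mcf_r) behave like pointer chases: each load's address depends on the previous load, so DRAM latency sets the pace regardless of core width or frequency. A minimal sketch of that access pattern; the serial dependency is the point, since timing it meaningfully requires C and working sets larger than the L3:

```python
import random

def build_chase(n, seed=42):
    """Build a random single-cycle permutation: nxt[i] says where to jump
    next. Chasing it defeats hardware prefetchers because each address
    depends on the previous load, so the CPU cannot run ahead."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    nxt = [0] * n
    for a, b in zip(idx, idx[1:] + idx[:1]):
        nxt[a] = b  # one cycle visiting every element exactly once
    return nxt

def chase(nxt, steps):
    i = 0
    for _ in range(steps):
        i = nxt[i]  # serial dependency: each load feeds the next
    return i

nxt = build_chase(1 << 16)
print(chase(nxt, 1 << 16))  # after n steps the full cycle returns to 0
```

More cache (or lower memory latency) is the only thing that speeds this pattern up, which is why SPEC's pointer-heavy tests barely moved between Zen generations while the execution-bound ones jumped.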
 

eek2121

Platinum Member
Aug 2, 2005
2,437
3,247
136
I am going to add in a few thoughts here.

  1. Mesh is the only known way forward for the time being. The software stack needs to be modified to take this into consideration.
  2. There will be architectural changes that help, including better cache implementations, however, nothing will be a “one size fits all” solution.
  3. Today it doesn’t matter much, but the i9-9900K, 10900K, and other chips will be hideously obsolete in 2-5 years.
EDIT: My 1950X in “game” mode could achieve sub-60 ns memory latencies.
 
