7700K vs 7800X TechSpot review

biostud

Lifer
Feb 27, 2003
18,236
4,755
136
https://www.techspot.com/review/1445-core-i7-7800x-vs-7700k/

AverageSlide.png


Obviously it is 1080p and will not matter much at 4K, but those are some really bad numbers for the 7800X.
 

dullard

Elite Member
May 21, 2001
25,054
3,408
126
How many times does it have to be said: gaming does not need (and will not need in the foreseeable future) high core counts.
 

biostud

Lifer
Feb 27, 2003
18,236
4,755
136
Yeah, but it isn't just a little less performance all round; it is pretty significant in some games, and the X99 CPUs never showed this behavior.

Even when overclocked to within 200MHz of each other, the 7700K is roughly 30% faster than the 7800X (worst-case scenario). My guess would be that the X99 platform would perform better than the 7800X.
FC.png
 
Last edited:

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
How many times does it have to be said: gaming does not need (and will not need in the foreseeable future) high core counts.
It will be said right up until the Coffee Lake 6 core chip does very well in games. :D
It's basically an improved 7700K with 2 more cores.
 
  • Like
Reactions: Drazick and Phynaz

biostud

Lifer
Feb 27, 2003
18,236
4,755
136
From the conclusion:

That doesn't really explain why the 7800X was just flat out slow by comparison for quite a few of the games tested. The likely reason for this is down to Intel restructuring the cache hierarchy. Compared to the 7700K, the 7800X has quadrupled the L2 cache per core while the shared L3 has been reduced by just over 30% per core. It's believed these changes combined with the way this new cache works makes Skylake-X more suited for server-related tasks and less efficient when it comes to things such as gaming, and that's certainly what we're seeing here.
 

dullard

Elite Member
May 21, 2001
25,054
3,408
126
It will be said right up until the Coffee Lake 6 core chip does very well in games. :D
It's basically an improved 7700K with 2 more cores.
A 7700k with two additional (mostly unused) cores will perform basically the same as a 7700k. Yes, that is correct--Coffee Lake should do quite well. But that isn't due to the core count.

There just is not a way for programmers to easily divide typical games into more than about 4 heavy tasks except for maybe in a few edge cases.
 

dullard

Elite Member
May 21, 2001
25,054
3,408
126
Yeah, but it isn't just a little less performance all round; it is pretty significant in some games, and the X99 CPUs never showed this behavior.
The other X99 CPUs showed that more cores do not help. Even a highly overclocked 6900K could barely keep up with a stock 7700K in games. The 7800X is more cores (doesn't help) plus cache changes (hurts in games, especially unoptimized ones).

This is a workstation/server chip where Intel sacrificed significant single-thread and gaming performance in exchange for workstation/server performance.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
A 7700k with two additional (mostly unused) cores will perform basically the same as a 7700k. Yes, that is correct--Coffee Lake should do quite well. But that isn't due to the core count.

There just is not a way for programmers to easily divide typical games into more than about 4 heavy tasks except for maybe in a few edge cases.

That's just not true. There are dozens of really big-name games that have demonstrated the ability to span more than 4 cores. Three major problems plagued the Ryzen launch and really hurt its performance, but core scaling in games does exist, and the 7700 is sitting on a precipice. It's the absolute fastest single-threaded CPU out there in both per-clock throughput (IPC) and speed (clocks), which helps it maintain performance even against CPUs with more cores, in games that utilize more cores. Mostly because while they can thread out their work, the actual workload tends to be optimized for the hierarchy of the 4c/8t solution. But in those games the 7700 is at the end of its rope, sitting at 90%+ CPU usage even on GPU-bottlenecked setups (1440p and 4K). Coffee Lake is going to fly: Nvidia is going to expand their DX12 driver optimization for the 6c/12t setup, and generally you will see games expand outside the old i7 setup into the new one.

For 7700 fans, the 8700 is just going to be a better chip by miles. As the clocks on the 7800X-7900X have shown, Coffee Lake can and probably will keep clocks decently close to the 7700, which means that in older games it's going to be just as good, and in anything released late 2016 or later Coffee Lake will run away from it.
 

dwade

Junior Member
Jun 25, 2017
19
24
36
It is a BIOS issue. Most competent reviewers, including the one on this website, are saying as much before jumping to conclusions. Most YouTubers don't even know what they're talking about.

PC Gamer's ongoing test with constant BIOS updates:
8dwBdxpxwEFbpWqEeu7tm-650-80.png


Virtually no performance regression in gaming after running the latest BIOS. TweakTown has the same findings as well.
 

ZGR

Platinum Member
Oct 26, 2012
2,052
656
136
Games may not need more cores, but if you want to stream 1080p60 (at a 5k bitrate) and record gameplay at 4K60, a quad core can't do it.

My i7-5775C can handle 720p60 streaming while recording 4K60 at 100% CPU usage without frame drops, somehow... But going to 1080p60 kills the stream.

Such a shame that the revised L2 for Skylake-X isn't a benefit in games, although it does make sense. Here's hoping for a 6-core Coffee Lake with 128MB of L4!
 

Insomniator

Diamond Member
Oct 23, 2002
6,294
171
106
It is a BIOS issue. Most competent reviewers, including the one on this website, are saying as much before jumping to conclusions. Most YouTubers don't even know what they're talking about.

PC Gamer's ongoing test with constant BIOS updates:
8dwBdxpxwEFbpWqEeu7tm-650-80.png


Virtually no performance regression in gaming after running the latest BIOS. TweakTown has the same findings as well.

If this is indeed the case, $390 for six 4.7GHz cores is prettttttty nice. Wish the boards would get cheaper!
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
Mostly because while they can thread out their work, the actual workload tends to be optimized for the hierarchy of the 4c/8t solution.
The actual workload is optimized for the architecture of the consoles. Consoles have one quad completely free for the game, so (up to) 4 heavy threads, and one quad that runs the OS and leaves 2 threads for additional game threads. Those extra threads will always be fighting the OS threads for resources, which is exactly why they are perfect for hyperthreading.

But in those games the 7700 is at the end of its rope, sitting at 90%+ CPU usage even on GPU-bottlenecked setups (1440p and 4K).
That just means that the 7700 could be running ~10% faster with proper optimization.
 

dullard

Elite Member
May 21, 2001
25,054
3,408
126
Games may not need more cores, but if you want to stream 1080p60 (at a 5k bitrate) and record gameplay at 4K60, a quad core can't do it.

My i7-5775C can handle 720p60 streaming while recording 4K60 at 100% CPU usage without frame drops, somehow... But going to 1080p60 kills the stream.

Such a shame that the revised L2 for Skylake-X isn't a benefit in games, although it does make sense. Here's hoping for a 6-core Coffee Lake with 128MB of L4!
That isn't gaming. That is content creation, while gaming on the excess cores.

My point being that the gameplay programming itself does not easily break down into more than four intense CPU threads, based on how games actually function. The player may or may not be doing anything. There may or may not be NPCs in the room. There may or may not be need for AI. There may or may not be physics going on. Etc. It is extremely easy to program if you just throw each task onto its own thread. But if that task isn't going on (there is no explosion for the explosion thread), then that thread and core sit idle. Dynamically moving threads to various cores based on what random things the user (or other users, if online) may be doing is not an easy programming task. Thus, that type of optimization is often skipped in favor of bigger gains (such as getting the product shipped to customers).

Instead, they often just fall back on one core per main task, which ends up being about 4 cores needed. For example, it would be easy to program a game with these threads (a) user, (b) map/server, (c) NPCs/AI, and (d) physics/explosions. Then just spend your programming time making a game worth playing.
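A bare-bones sketch of that one-thread-per-task pattern (the subsystem names are made up for illustration, and the sleeps stand in for real work):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};

// One thread per main task -- the "easy" design. If a subsystem has
// nothing to do this frame (no explosion for the explosion thread),
// its core just sits idle.
void userInput() { while (running) std::this_thread::sleep_for(std::chrono::milliseconds(1)); }  // poll input
void mapServer() { while (running) std::this_thread::sleep_for(std::chrono::milliseconds(1)); }  // world state
void npcAi()     { while (running) std::this_thread::sleep_for(std::chrono::milliseconds(1)); }  // AI decisions
void physics()   { while (running) std::this_thread::sleep_for(std::chrono::milliseconds(1)); }  // explosions

int main() {
    std::thread a(userInput), b(mapServer), c(npcAi), d(physics);
    std::this_thread::sleep_for(std::chrono::seconds(1));  // "play" for a second
    running = false;
    a.join(); b.join(); c.join(); d.join();
}
```

Four threads, and on a four-core chip you're done: adding cores adds nothing, because no fifth heavy task exists.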
 
Last edited:

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
My point being that the gameplay programming itself does not easily break down into more than four intense CPU threads, based on how games actually function. The player may or may not be doing anything. There may or may not be NPCs in the room. There may or may not be need for AI. There may or may not be physics going on. Etc. It is extremely easy to program if you just throw each task onto its own thread. But if that task isn't going on (there is no explosion for the explosion thread), then that thread and core sit idle. Dynamically moving threads to various cores based on what random things the user (or other users, if online) may be doing is not an easy programming task. Thus, that type of optimization is often skipped in favor of bigger gains (such as getting the product shipped to customers).
Not how things work...
Every thread is always running; the "explosion thread" from your example would be one of those sitting 100% idle, with zero CPU cycles since the last measurement, as in the pic in the spoiler.
Dynamically moving threads to various cores based on what random things the user (or other users, if online) may be doing is handled 100% by the Windows scheduler, which reshuffles all threads in the thread pool onto new cores every cycle. That is one of the main problems for new architectures, as seen with FX, now Ryzen, and even Intel's hyperthreading to this day: the scheduler doesn't know the layout of the available cores and sends threads to suboptimal logical or physical cores (a sketch of the affinity workaround games can use is at the end of this post).
6cAtXQz.jpg
Instead, they often just fall back on one core per main task, which ends up being about 4 cores needed. For example, it would be easy to program a game with these threads (a) user, (b) map/server, (c) NPCs/AI, and (d) physics/explosions. Then just spend your programming time making a game worth playing.
No, what you see is the main task (the rendering of the scene) being split up over all available cores (because of consoles, "all available cores" means the 4 cores of whichever console quad is not running the OS). Some games can split that work over more than 4 cores, but there aren't that many of them.

An easily reproduced example of this is DOOM, where you can use jobs_numthreads in the game's console to change, on the fly, the number of threads that do "all the work".
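And for what it's worth, this is roughly what the affinity workaround looks like from the game's side on Windows; a minimal sketch that hardcodes the mask purely to show the mechanism (a real engine would query the core layout first):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Pin the current thread to logical processor 0 so the scheduler
    // can't bounce it onto an SMT sibling (or, on Ryzen, another CCX).
    // Hardcoded mask for illustration only.
    DWORD_PTR mask = 1;  // bit 0 = logical processor 0
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), mask);
    if (previous == 0) {
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("pinned; previous affinity mask was 0x%llx\n",
                (unsigned long long)previous);
    // ... heavy game-thread work would run here ...
    return 0;
}
```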
 

dullard

Elite Member
May 21, 2001
25,054
3,408
126
Not how things work...
Then why did you just back up what I said with all that you posted?

Throwing a bunch of threads at it and letting Windows guess is lazy, bad programming. But it gets the job done. It also ends up with about 4 heavy tasks at most.

To do it right, the programmer, not Windows, needs to do the heavy lifting. It just isn't done very often.
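To be clear about what "heavy lifting" means: something like a job system, where the game chops work into small tasks and a fixed pool of workers pulls them, instead of parking one OS thread per subsystem. A bare-bones sketch of the shape of it (illustrative only, nothing like a production engine):

```cpp
#include <atomic>
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal job pool: a fixed set of workers pulls whatever task is
// ready, so busy frames use every core and quiet frames use few.
class JobPool {
public:
    explicit JobPool(unsigned n = std::thread::hardware_concurrency()) {
        if (n == 0) n = 4;  // hardware_concurrency may report 0
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([this] { run(); });
    }
    ~JobPool() {  // drain remaining jobs, then join the workers
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
        for (auto& w : workers) w.join();
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m); jobs.push(std::move(job)); }
        cv.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return done || !jobs.empty(); });
                if (done && jobs.empty()) return;
                job = std::move(jobs.front());
                jobs.pop();
            }
            job();  // run outside the lock so other workers stay busy
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> jobs;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
};

int main() {
    std::atomic<int> ran{0};
    {
        JobPool pool;
        for (int i = 0; i < 100; ++i)
            pool.submit([&ran] { ++ran; });  // stand-ins for AI/physics/etc. tasks
    }  // pool destructor drains and joins before 'ran' goes away
    std::printf("%d jobs ran\n", ran.load());
}
```

That machinery (plus making the tasks actually independent) is the part that rarely gets programmer time.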
 

noneis

Junior Member
Mar 4, 2017
21
29
91
Gaming performance is not that surprising.

1.) Skylake-X has higher cache latencies compared to the 7700K.
2.) Skylake-X memory bandwidth scales with core count. It has less than half the single-thread bandwidth of Ryzen, but 50%+ more bandwidth with 4 cores.
  • A single Ryzen core has full access to both memory channels, the same as 16 threads on the die.
  • A single Skylake-X core has full access to only one memory channel; two cores to two channels, etc.
  • Skylake-X has 34% less single-core memory bandwidth than Broadwell, despite faster memory (2666 vs. 2400).
  • Games are often memory-bandwidth bottlenecked on a single thread, not across multiple threads.
Edit: Point 2 explains why overclocking yielded no performance gain for Skylake-X in some games: a single-thread memory bandwidth bottleneck. A quick way to check this yourself is sketched below.
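A rough streaming-read microbenchmark for seeing the single-thread ceiling (not a calibrated tool; the only requirement is a buffer much larger than L3):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Rough single-thread bandwidth probe: stream-read a buffer much
// larger than L3 so the loop measures DRAM, not cache.
int main() {
    const std::size_t n = 512ull * 1024 * 1024 / sizeof(std::uint64_t);  // 512 MiB
    std::vector<std::uint64_t> buf(n, 1);

    std::uint64_t sum = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i) sum += buf[i];
    auto t1 = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(t1 - t0).count();
    std::printf("~%.1f GB/s (checksum %llu)\n",
                n * sizeof(std::uint64_t) / secs / 1e9,
                (unsigned long long)sum);  // print sum so it isn't optimized out
}
```

Run it pinned to one core on each platform and the single-channel behavior shows up directly.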
 
Last edited:

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Has anyone done a head-to-head of the 5820K vs the 7800X? Since the Broadwell X99 chips were largely forgettable, a lot of folks like me stayed on Haswell X99.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Since the Broadwell X99 chips were largely forgettable, a lot of folks like me stayed on Haswell X99

I'd hardly call it forgettable. Broadwell-E has a 5-10% IPC increase over Haswell-E and significantly lower power usage, and Intel introduced its first true DDR4 memory controller with Broadwell-E, which lacked Haswell-E's write-performance bugs.

Not bad for a tick. I can tell you that my 6900K is noticeably faster than my 5930K even for regular day-to-day simple desktop stuff.
 
  • Like
Reactions: pcp7

TheGiant

Senior member
Jun 12, 2017
748
353
106
It is a BIOS issue. Most competent reviewers, including the one on this website, are saying as much before jumping to conclusions. Most YouTubers don't even know what they're talking about.

PC Gamer's ongoing test with constant BIOS updates:
8dwBdxpxwEFbpWqEeu7tm-650-80.png


Virtually no performance regression in gaming after running the latest BIOS. TweakTown has the same findings as well.

The min FPS, or the 1st percentile, is what matters most. That is where SKL-X fails, according to benches. The average is not so bad.
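For reference, a toy sketch of how the "1% low" figure is commonly computed from a frametime log (the data here is made up, not from any review):

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <vector>

// "1% low" as commonly reported: average FPS over the slowest 1% of
// frames, computed from per-frame times in milliseconds.
double onePercentLowFps(std::vector<double> frametimesMs) {
    std::sort(frametimesMs.begin(), frametimesMs.end(),
              std::greater<double>());                    // slowest first
    std::size_t n = std::max<std::size_t>(1, frametimesMs.size() / 100);
    double sumMs = 0;
    for (std::size_t i = 0; i < n; ++i) sumMs += frametimesMs[i];
    return 1000.0 / (sumMs / n);                          // ms -> FPS
}

int main() {
    // Made-up frametime log: mostly ~60 FPS with one bad stutter.
    std::vector<double> log = {16.7, 16.7, 16.9, 17.0, 33.4};
    std::printf("1%% low: %.1f FPS\n", onePercentLowFps(log));
}
```

A chip can post a fine average while its slowest 1% of frames tanks, which is exactly the SKL-X pattern in those benches.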
 
  • Like
Reactions: Grazick

TheGiant

Senior member
Jun 12, 2017
748
353
106
You probably won't get that. Run fast enough DDR4 and you might not miss it. Though that L4 sure is nice for "average Joe" performance.
That is what everyone is saying: fast enough RAM is the same as the L4. Yet the 5775C can reach gaming performance at those CPU clocks that no KBL can match, even with 4000MHz RAM.

Why is that?