Review Intel 10th Generation Comet Lake-S Review Thread


Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Reviews of the soon-to-be-released 10th generation desktop lineup, as well as all other relevant information, will be linked in this thread. The OP will be updated as information becomes available in the next few days. Please post links to reputable sites you want to see in the OP, and I'll add them. Thanks!

Anandtech
Phoronix (Linux Benchmarks)
LTT (YouTube Video)
Gamers Nexus
Euro Gamer
ComputerBase.de
Back2Gaming
HWUB (YouTube Video)
Sweclockers
Nordic Hardware


Reviews Roundup on VideoCardz
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
My problem is that they pick and choose. They use high power to get a good benchmark, and low power to show low power usage. You can't do both. If you get the performance out of this chip, it DOES take a lot of power. Nobody seems to acknowledge that. Nobody that likes Intel. And when it takes a lot of power, it takes a lot to cool it, as it is HOT.

The 125 watt power limit? That would produce very bad benchmark results.
125w = PL1
250w = PL2 (Turbo), which even in Anandtech's review was only achieved with a power virus, and more importantly, it was not a sustained draw. The rendering and encoding benchmarks did not hit 250w. Do you understand this?
As @IntelUser2000 pointed out, using peak power instead of average power to represent power draw is bogus because the chip is not drawing 250w in any of the benchmarks that matter.
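To make the PL1/PL2/tau distinction concrete, here is a rough sketch of how the power limits interact on a long all-core load. This is a deliberate simplification (the real hardware tracks an exponentially weighted moving average of package power, and board vendors often override the limits); the numbers are the stock values quoted in this thread.

```python
# Simplified sketch of Intel's PL1/PL2/tau turbo budget (illustrative only;
# real silicon uses an exponentially weighted moving average, not a hard cutoff).
def package_power(t, pl1=125.0, pl2=250.0, tau=56.0, demand=260.0):
    """Approximate package power draw at t seconds into a sustained all-core load."""
    cap = pl2 if t < tau else pl1   # turbo budget exhausted after roughly tau seconds
    return min(demand, cap)

# Over a 10-minute render, the average sits much closer to PL1 than to PL2:
samples = [package_power(t) for t in range(600)]
avg = sum(samples) / len(samples)
print(f"peak={max(samples):.0f} W, sustained avg={avg:.1f} W")
```

This is why quoting the 250w peak as "the" power draw of a long benchmark misrepresents it: the peak only applies for the first ~tau seconds at stock settings.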
 

DrMrLordX

Lifer
Apr 27, 2000
21,631
10,843
136
But would the benchmarkers put it at PL2 (245 watts?) to show it in a better light?

A few things:

1). Thanks to the 9900k, those of us who were paying attention now know more about PL1 and PL2 values, and reviewers are encouraged to show us what values the mobo OEMs have included for those power states, along with the relevant tau values.
2). Intel seems to have been publishing more-realistic PL2 values.

So yeah, some benchmarks are going to be run with MCE enabled, to try to hit that sweet 4.9 GHz in every workload. But the reviewers seem to be labeling the results as such (and telling us that power consumption will rise considerably in the process).

So what's your issue, exactly?

It's an old process with an old uarch. People (guess who?) are playing shell games to try to make it look like it gets top performance while using less power than "we thought it would". When 10c Comet Lake-S was announced, I guessed 4.7 GHz @ 210w, and instead we got 4.9 GHz @ 250w. You keep using the word "power virus" as if that somehow lets you handwave away benchmarks other CPUs are expected to run without displaying undesirable behavior.

If someone buys a 10900k and tries to fold on it, either they leave it stock and the power/clocks take a dump, or they enable MCE and it stays as close to PL2 as it can for as long as it can.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
If someone buys a 10900k and tries to fold on it

That's the problem: 99.9% of 10900k users will not render or fold. We are long past "enough multithreaded performance" for desktop tasks like gaming or web or compression. Browsing the web on a 10900k will use measurably less power than an X570 platform, even if we allow for external GPUs for both.

Even if we move to the "workstation" tasks that benefit from strong MT performance, like compilation, the fact remains that people spend 90+% of their time in the IDE, where ST performance is more important than cutting some more time from compilation.


Let's get real, prime95-style peak power usage is irrelevant for 10900k users. What is horrible is that we get a workstation-class (and workstation-priced) power delivery platform coupled with 5-year-old cores and an IO setup from the same heritage.

There has been zero innovation from Intel there, and things like 2.5-10 Gbit Ethernet, proper USB 3, and Thunderbolt are looking increasingly ridiculous at DMI speeds.

The world where a Samsung external T7 USB drive takes 1/4 of the DMI bandwidth, on a motherboard that has to deliver north of 250w, is a joke.
 

misuspita

Senior member
Jul 15, 2006
401
452
136
That's the problem: 99.9% of 10900k users will not render or fold. We are long past "enough multithreaded performance" for desktop tasks like gaming or web or compression. Browsing the web on a 10900k will use measurably less power than an X570 platform, even if we allow for external GPUs for both.

Even if we move to the "workstation" tasks that benefit from strong MT performance, like compilation, the fact remains that people spend 90+% of their time in the IDE, where ST performance is more important than cutting some more time from compilation.
So... what is your message? That people buy Intel's top-of-the-line mainstream CPU and do not use it to its potential? For gaming they probably would be equally served with a 10700k, or even a 10600k. Or (gasp) a 3900x.

But for productivity you need all those cores and clocks going as fast as necessary to finish the task at hand faster. And that would need PL2.
 

uzzi38

Platinum Member
Oct 16, 2019
2,632
5,950
146
4.9GHz @200 watts in Blender, not Prime 95 or Y-Cruncher.

View attachment 21522
It factually cannot be sustaining 4.9GHz. The maximum temperature for TVB operation has already been exceeded in this test. Hardware Unboxed likely noted down the frequency near the beginning of the test or simply didn't do it at all and assumed it was at 4.9GHz.


EDIT: Typo, meant temperature.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
It factually cannot be sustaining 4.9GHz. The maximum temperature for TVB operation has already been exceeded in this test. Hardware Unboxed likely noted down the frequency near the beginning of the test or simply didn't do it at all and assumed it was at 4.9GHz.
You're trying to bring facts into what seems to be a religious debate.
 

Hitman928

Diamond Member
Apr 15, 2012
5,263
7,891
136
It factually cannot be sustaining 4.9GHz. The maximum temperature for TVB operation has already been exceeded in this test. Hardware Unboxed likely noted down the frequency near the beginning of the test or simply didn't do it at all and assumed it was at 4.9GHz.

A motherboard maker can use MCE to do essentially whatever the heck they want with frequency and duration, so I think it's accurate. I will say that even in Blender, how much power it uses is scene-dependent, and of course there are other real applications (not power viruses) that will cause the CPU to use more power than Blender.
 

DrMrLordX

Lifer
Apr 27, 2000
21,631
10,843
136
That's the problem: 99.9% of 10900k users will not render or fold.

@AtenRa and others have made the point that the extra two cores on the 10900k are mostly pointless. So yes, that's true!

Browsing the web on a 10900k will use measurably less power than an X570 platform, even if we allow for external GPUs for both.

. . . which is why AMD didn't use Matisse for mobile applications, which is the only sector where x570's high idle power would actually matter.

Let's get real, prime95-style peak power usage is irrelevant for 10900k users.

. . . okay. That's not exactly a ringing endorsement of the 10900k.

What is horrible is that we get a workstation-class (and workstation-priced) power delivery platform coupled with 5-year-old cores and an IO setup from the same heritage.

True. It's what comes from rehashing the same old tech over and over again.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136





Dropping in a video without your own commentary is not allowed in the CPU or VC&G forum.


esquared
Anandtech Forum Director
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
@AtenRa

330W in CBR20, hahah
Configured with unlimited power (4095 watts? Jeez), a 5.3GHz all-core overclock, and 4133MHz RAM, running an AVX2-based app. What did you expect?

I suggest you overclock your 3900x to 4.5GHz All-Core overclock and run the same benchmark and post your result here.

PS: You definitely have the cooling for it; hopefully your chip can handle that overclock? Shouldn't be difficult.
 

ondma

Platinum Member
Mar 18, 2018
2,721
1,281
136
@AtenRa

330W in CBR20, hahah
Yea, I run CBR20 all day. In any case, that was at 5.3 GHz; decreasing the overclock a couple hundred MHz would lower the power consumption considerably, as the reviewer mentioned.

Edit: you beat me to it Zucker, your comment posted while I was typing mine.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
10900K sustained power consumption is atrociously bad even when you're running actual useful AVX2 workloads.

[chart: 10900K sustained power consumption in an AVX2 workload]
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
10900K sustained power consumption is atrociously bad even when you're running actual useful AVX2 workloads.

[chart: 10900K sustained power consumption in an AVX2 workload]
222w is turbo. It's perfectly fine because it only lasts for 56 seconds.
240w is a 5.1GHz All-Core Sustained load. I think it's impressive.

Unless, of course, you can change my mind with a Zen 2 processor with an 800MHz All-Core overclock on top of stock All-Core Turbo on AIO cooling. Never gonna happen.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
10900K sustained power consumption is atrociously bad even when you're running actual useful AVX2 workloads.

I think it is bad everywhere, using way more power per core than AMD does on 7nm. But that's what you pay for the best ST and desktop performance.
I've always found this forum's fascination with folding, rendering and encoding completely detached from what most people do with their PC, even in a semi-professional setting. Heck, even "casual" encoding is best left to that awesome Turing NVEnc, which beats all those CPUs in speed and power efficiency so hard it is no longer funny.

330W in CBR20, hahah

That's what's bad with these forums; I bet some Intel fan will jump in with:

But, but 10900K when oced has better Far Cry 5 minimum FPS than 3950x averages are!

Sadly both are irrelevant to real world use.
 

Hitman928

Diamond Member
Apr 15, 2012
5,263
7,891
136
222w is turbo. It's perfectly fine because it only lasts for 56 seconds.
240w is a 5.1GHz All-Core Sustained load. I think it's impressive.

Unless, of course, you can change my mind with a Zen 2 processor with an 800MHz All-Core overclock on top of stock All-Core Turbo on AIO cooling. Never gonna happen.

The problem is that even when using 240W, it just barely catches up to the 3900x in performance in well threaded programs.

[chart: multi-threaded performance comparison]


Which ends up giving you a graph like this:

[chart: efficiency (performance per watt)]


So why would the 3900x need to blow out its power consumption chasing the last few hundred MHz, when it can match or beat the 240W-consuming 10900K in the vast majority of multi-threaded applications and do so using over 40% less power?
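The efficiency argument above can be put in numbers. The figures below are invented placeholders chosen to match the shape of the claim (equal throughput at roughly 40% less package power), not values read from the charts:

```python
# Hypothetical illustration of the performance-per-watt gap described above.
# Scores and wattages are made-up placeholders, not measured results.
def perf_per_watt(score, watts):
    return score / watts

cpu_a = perf_per_watt(7100, 240.0)   # e.g. a 240 W all-core configuration
cpu_b = perf_per_watt(7100, 140.0)   # same score at ~42% less power

print(f"{cpu_b / cpu_a:.2f}x the efficiency")   # -> 1.71x the efficiency
```

Equal performance at 42% less power is the same thing as roughly 1.7x the performance per watt, which is what an efficiency chart like the one above plots.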
 

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
Actually, in the Tom's Hardware review they do not use the recommended parameters, due to the motherboard used for testing. So the >=200 W power consumption holds for the entire duration of the encode, during which the 10900K still loses to a 3900X by a few seconds.

10 Skylake cores at 4.8-4.9GHz still cannot beat 12 Zen 2 cores at 4-4.1GHz in a workload that scales with frequency at high core counts while consuming over 65% more power.

Doesn't seem very impressive to me.
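That core-count versus clock comparison checks out on the back of an envelope. Taking the midpoints of the clock ranges quoted above (and ignoring IPC, memory, and AVX clock differences, which this crude metric can't capture):

```python
# Back-of-the-envelope aggregate throughput in raw core-GHz,
# using the midpoints of the clock ranges quoted in the post above.
def core_ghz(cores, ghz):
    return cores * ghz

skylake = core_ghz(10, 4.85)   # 10 cores at 4.8-4.9 GHz
zen2    = core_ghz(12, 4.05)   # 12 cores at 4.0-4.1 GHz

# Essentially a wash in raw core-GHz, so the ~65% power gap is what decides it.
print(f"{skylake:.1f} vs {zen2:.1f} core-GHz")
```

With the raw core-GHz nearly identical, any win has to come from IPC or efficiency, and the power numbers show where those land.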
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
The problem is that even when using 240W, it just barely catches up to the 3900x in performance in well threaded programs.

[chart: multi-threaded performance comparison]


Which ends up giving you a graph like this:

[chart: efficiency (performance per watt)]


So why would the 3900x need to blow out its power consumption chasing the last few hundred MHz, when it can match or beat the 240W-consuming 10900K in the vast majority of multi-threaded applications and do so using over 40% less power?
First of all, those two charts are not related. Secondly, unless the benchmark runs for less than 56 seconds, which I doubt, the stock 10900k should ramp down to 125w, in which case all this chart shows is that pushing this chip to extremes is inefficient. Duh!

In any case, Intel is not going to win an efficiency test against AMD as things stand. It's 7nm vs 14nm. Which is why I find it ridiculous that people keep posting the chip turboing or overclocked in order to place it in a bad light. Power consumption is measured as power draw over a period, not peak draw.

In terms of raw performance, look at how close the stock 10900k is to the 3900x, and how much farther it is from the 3700x. That should tell you the extra power being drawn on the older and less efficient node is being put to rather good use. Won't you agree?
 

tamz_msc

Diamond Member
Jan 5, 2017
3,795
3,626
136
I've always found this forum fascination with folding, rendering and encoding and running completely detached from what most people do with their PC even in semi professional setting.
So nobody uses these CPUs in the real world at 100% load using vector instructions for sustained periods of time?
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
So why would the 3900x need to blow out its power consumption chasing the last few hundred MHz, when it can match or beat the 240W-consuming 10900K in the vast majority of multi-threaded applications and do so using over 40% less power?
The point is to push chips past stock clocks while feeding them unlimited power, to give us an idea about consumption. We see that all the time with Intel chips. How about AMD?
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
That should tell you the extra power being drawn by the older and less efficient node is being put to rather good use. Won't you agree?
No, the truth is the opposite. The power drawn by the 10-series to claw its way to performance parity is so high on the frequency/voltage curve that it is a case of extremely diminishing returns.

There is *zero* innovation required to crank voltage to hit higher performance. The artistry in CPU engineering is achieving performance without having to crank voltage.
 

Hitman928

Diamond Member
Apr 15, 2012
5,263
7,891
136
First of all, those two charts are not related. Secondly, unless the benchmark runs for less than 56 seconds, which I doubt, the stock 10900k should ramp down to 125w, in which case all this chart shows is that pushing this chip to extremes is inefficient. Duh!

In any case, Intel is not going to win an efficiency test against AMD as things stand. It's 7nm vs 14nm. Which is why I find it ridiculous that people keep posting the chip turboing or overclocked in order to place it in a bad light. Power consumption is measured as power draw over a period, not peak draw.

So your stance is basically: who cares about power consumption, performance is the only thing that matters, and even if you lose in performance, if you can clock higher while performing worse, it's more impressive? That's a very strange opinion to me, but to each their own. We're not gonna find much common ground on the matter though.

In terms of raw performance, look at how close the stock 10900k is to the 3900x, and how farther apart it is from the 3700x. That should tell you the extra power being drawn by the older and less efficient node is being put to rather good use. Won't you agree?

The efficiency chart says the exact opposite. The extra power being used is by and large being wasted.