Discussion: Comet Lake: Intel's new Core i9-10900K runs at over 90°C, even with liquid cooling (TweakTown)

Page 12 - AnandTech community discussion

piokos

Senior member
Nov 2, 2018
142W chips don't hit the 95°C limit too easily.
Zen2 does because of what you said: dense architecture and hotspots. That's the whole point.

Let's just step back to a principle for a moment.
We have a Zen 2 (7nm) CPU pulling 140W and a Comet Lake (14nm) CPU pulling 140W. We use the same cooler.
What I'm saying is: Zen2 will report higher temperatures. Do we converge on that? :)

And BTW: I'm not saying there are no hotspots in the Intel chip that are hotter than the reported temp. It is possible.
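The claim here (same power, same cooler, higher reported temps on the denser die) can be put in rough numbers with a toy two-resistance model. Every value below is invented purely for illustration; `r_spread` stands in for the local spreading resistance over a dense core, and none of these numbers are measurements:

```python
# Toy model of why two chips at the same package power can report very
# different temperatures. The reported temp tracks the hottest sensor,
# which sits behind a local spreading resistance that is worse on a
# denser die. All numbers are illustrative, not measured.
def hotspot_temp(ambient_c, package_w, r_cooler, hotspot_w, r_spread):
    """Cooler resistance sets the package temp; the local spreading
    resistance adds the hotspot delta on top of it."""
    t_package = ambient_c + package_w * r_cooler
    return t_package + hotspot_w * r_spread

# Same 140 W package power, same cooler (0.25 C/W), same 20 W in the
# hottest core; only the local spreading resistance differs.
comet_like = hotspot_temp(25, 140, 0.25, hotspot_w=20, r_spread=0.3)  # big 14nm core
zen2_like  = hotspot_temp(25, 140, 0.25, hotspot_w=20, r_spread=0.9)  # dense 7nm core

print(f"14nm-style reported temp: {comet_like:.0f} C")
print(f"7nm-style reported temp:  {zen2_like:.0f} C")
```

Same heat into the same cooler either way; only the hotspot reading differs, which is the whole argument in this exchange.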
Yeah I keep forgetting that.
Thankfully I don't. Because you're extremely confident in your statements.
You can run one with Wraith Prism. It's still a 142W chip. That lame little 2-heatpipe cooler isn't quite enough, but it's funny watching it try.
Well, I did a quick search and it seems people have tried doing that. There's just a slight performance loss, but that means it's probably hitting 95°C.
If you have some good article about it (with temperatures and performance shown), post it.
But that's likely controlled testing, with good airflow (or open case) and relatively low ambient.

So for a more general use: in a case and with higher ambient, I'd probably follow AMD's recommendation.

Either way, if you think AMD's official statement is incorrect, you have to discuss it with them, not me. :)
In other words:
Oh yeah.

Just for the record: you can use a Wraith Prism-class air cooler on a 10900K as well. It will be loud under load and it'll cut some performance (I guess: ~20% in full all-core load), but it's totally feasible otherwise.
 

piokos

Senior member
Nov 2, 2018
So how well does the Intel stock cooler work?
Intel stock cooler on a 3900X? Not at all - even if you adapt it.
And on the Intel CPUs it comes with, it also spins at 100% and gets very loud. Performance-wise it's fine on i3 and i5. On i7 and i9 it'll lower boost clocks.
I never said it's better. But it is cheaper. And smaller. :)
 

vissarix

Senior member
Jun 12, 2015
How is this troll thread not deleted yet? It's clear misinformation and a blatant lie...
Skip to 4:50 mark...

And with the same cooler, this i9-10900K runs cooler than all the Ryzens tested here. So much for the "omg it runs HOT HOT HOT" as said by the OP ;)
 

Zucker2k

Golden Member
Feb 15, 2006
How is this troll thread not deleted yet? It's clear misinformation and a blatant lie...
Skip to 4:50 mark...

And with the same cooler, this i9-10900K runs cooler than all the Ryzens tested here. So much for the "omg it runs HOT HOT HOT" as said by the OP ;)
Even with significant power consumption on top! Intel engineers deserve a pay raise hehe.

And I did say 3900X remains faster on average in that test set. And that Intel wins in particular tests. Not sure why you're repeating that. :)
The 3900x loses to the 10900k on average in non-gaming tests. I was really surprised how it was even dominating the 3950x in quite a number of multithreaded loads.
But when it comes to the more multi-threaded workloads commonly experienced on Linux, the Core i9 10900K was generally just running in between the Ryzen 9 3900X and Ryzen 9 3950X across the wide mix of workloads benchmarked:...

I appreciate your objectivity, wherever you are, lady/sir!

It does seem difficult for you to understand the implication of those temperatures. Those are hotspot temps.
So? A 105W-TDP Zen 2 chip doesn't have to dissipate 144 watts in heavy multithreaded scenarios? And how do you tame all that heat, and the hotspots on top? Perhaps with even better cooling than normal, like AMD recommends?

Not sure why anyone's comparing the 10900k to the 3900x, when the total cost of acquiring a 10900k is much closer to that of the 3950x. $530 for the chip, $150 (or more) to cool it . . . meanwhile a 3950x costs $710 at NewEgg. With its own cooler.
Funny, the 3900x is now no longer a competitor to the 10900k? A 12 core monster of a chip that debuted at $500? Even if it's $410 now, you still need a decent cooler (don't make me call @Mopardude in here), more expensive RAM because "Ryzen loves fast RAM," and some form of video output. As for the 3950x, that's $710 + 280mm AIO + Expensive RAM + Some form of video output just to see your desktop.

3950X comes without a bundled cooler.
AMD recommends 280mm AiO. The same kind of cooling will also be sufficient on 10900K.
So it's still $710 for 3950X and $530 for 10900K ($510 for 10900KF).
Sounds fair enough to me.
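Since both sides keep re-deriving the same sums, here are the totals laid out in one place, using only the prices quoted in this exchange (mid-2020 street prices; the cooler line item is the disputed input, with AMD recommending a 280mm AIO for the 3950X and others arguing a spare Wraith Prism suffices):

```python
# Back-of-envelope platform cost using only the prices quoted in the
# thread (USD). The cooler column is the contested variable.
def platform_cost(chip_usd, cooler_usd):
    return chip_usd + cooler_usd

r3950x_amd_rec = platform_cost(710, 150)  # 3950X + 280mm AIO per AMD's recommendation
r3950x_prism   = platform_cost(710, 0)    # 3950X + a spare Wraith Prism
i10900k        = platform_cost(530, 150)  # 10900K + 280mm AIO
i10900kf       = platform_cost(510, 150)  # 10900KF + 280mm AIO
r3900x         = platform_cost(410, 0)    # 3900X with its bundled cooler

print(f"3950X (AMD rec): ${r3950x_amd_rec}, 3950X (spare Prism): ${r3950x_prism}")
print(f"10900K: ${i10900k}, 10900KF: ${i10900kf}, 3900X: ${r3900x}")
```

Whether the 10900K lands near the 3900X or the 3950X in total cost turns entirely on which cooler assumptions you accept for each chip.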

Yeah I keep forgetting that.
Don't forget, I'll keep reminding you, because 144 watts and hotspots need something more than a Wraith Prism to cool sufficiently, and that'll be water, according to AMD.
 

maddie

Diamond Member
Jul 18, 2010
So how well does the Intel stock cooler work?

I suppose, to be fair, it is utterly silent.
Intel stock cooler on a 3900X? Not at all - even if you adapt it.
And on the Intel CPUs it comes with, it also spins at 100% and gets very loud. Performance-wise it's fine on i3 and i5. On i7 and i9 it'll lower boost clocks.
I never said it's better. But it is cheaper. And smaller. :)
I don't think you got the point.
 

Topweasel

Diamond Member
Oct 19, 2000
First of all: Phoronix isn't using Adobe in tests. So I assume you may have confused them with Puget Systems, right?
Anyway: yes, an Adobe-based benchmark will mix ST and MT tasks. Just like any good benchmark should. Just like an actual user would.
Yes, Puget, not Phoronix. My bad. As far as ST and MT, sure, there should be a mix. But how should that be weighted, and how is it weighted? Because the results are pretty ST-heavy. That doesn't mean it's userbenchmark-like weighting. But I feel that outside of a few of the more common functions, the mix is going to change heavily from use case to use case.
And of course that mix is representative for just some workflows and just some users. No other way. But it's still checking performance in very popular software.
Furthermore, good reviews will also include results for particular components, so that - if you know what is more representative for your workflow - you can focus just on them.
Didn't know they broke it out quite like that. That's great, actually. The problem is I don't like the "Puget" tests that some reviewers put out, where they run a script or tool set that runs the test for them, because it hides information, including weighting, and assumes that previous experience with them equals current experience.
I don't know why you're trying to undermine this kind of testing. Do you think it makes less sense than Cinebench? :)
To some degree, as a whole package. As far as looking at a CPU release goes, it's really not that hard to tell when and where it will suck or rock, especially when you know the general weaknesses. I don't need a test with some mystery series of different "real world use cases" outputting a number for me. That's how you get to where userbenchmark is now. I want to know what the general filter performance is like, what the general export performance is like, and go from there. Someone else running the Puget suite means nothing to me (well, didn't till now; I'm not sure I can agree with their full test selection, and again weighting, but at least their testing information is available there).

They affect the max boost if they hit the 95°C limit. That's the whole point of measuring temperature.
You're basically saying that high temperatures aren't a problem on Zen2, because it uses less power. But they somehow are a problem on Comet Lake, because there's more heat.
I mean: what's going on? :)
Both processors will throttle when they hit the limit. And when that happens, 3900X will pull 140W and 10900K will pull 250W.
Do you know what makes the 10900k special compared to even the 10900? Intel sands down the silicon and uses a spreader that is thicker where the die is. This gives the die better contact with the spreader, so the spreader can do its job and feed heat to the HSF that much quicker. They need to do this to maximize the cooling available during spiky power usage, to not roast the die too quickly.

The same thought process works with the cooling concerns of AMD's 7nm CCDs. AMD's power management is spiky, but even more than that, once a core goes active, due to the size of the core within an already dense and small CCD, temps for that core shoot up momentarily. They level back down a bit once the cooler wipes off that heat, even if the core is still active. As a whole, a 3900x and 3950x are very reasonable to cool. But the inefficiencies of a shrunk die, a super-small core, and not having direct die cooling mean a Zen 2 CPU will always spike up in temp when CPU activity starts, and that outside of sub-ambient chilling, there is a minimum temp you really won't ever get under while the CPU is active. Think of this as Ivy Bridge with its toothpaste TIM. The science behind it is actually pretty similar. An IB i7 wasn't any harder to cool, could clock as high, and didn't use any more power, relatively, than a SB. It just took longer for new power spikes at the die to transfer through the paste, and from the paste to the HSF, which meant the CPU simply ran hotter at each point.
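The Ivy Bridge toothpaste analogy maps onto a simple series-resistance picture: die, then the die-to-spreader interface, then the cooler. A worse interface raises die temperature without changing how much heat ultimately reaches the cooler. A minimal steady-state sketch, with all resistance constants invented for illustration:

```python
# Series thermal path: die -> interface -> cooler -> ambient.
# Same power and same cooler; only the die-to-spreader interface
# differs. All resistances are illustrative (C/W), not measured.
def die_temp(power_w, r_interface, r_cooler, ambient_c=25.0):
    # In steady state all the power flows through both resistances in series.
    return ambient_c + power_w * (r_interface + r_cooler)

soldered = die_temp(95, r_interface=0.10, r_cooler=0.30)  # Sandy Bridge-like
pasted   = die_temp(95, r_interface=0.35, r_cooler=0.30)  # Ivy Bridge-like

# The cooler sees the same 95 W either way; only the die runs hotter.
print(f"good interface: {soldered:.1f} C, poor interface: {pasted:.1f} C")
```

This is steady state only; the transient effect described above (the die heating faster than the cooler can feel it) is the same story with a thermal capacitance added in front of the interface.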

All of that is a crazy different thing from trying to go from cooling 140w to cooling 200w, 300w, or, as Phoronix picked up, 400w.

In the end, if you have the cooling and the CPU handles it fine, then so be it. None of this is to tell someone who purchased a 10900k that they made a bad choice. It's to go over the value of the CPU as a whole compared to its competition, and a large portion of that is knowing what you need to cool it. If you are running stock, it's not much, because hitting the limit takes heat-soaking the cooler, and the same 140w cooler a 3900x or 3950x would feel comfortable under is fine. But then it's not better than a 10700k in ST or a 3900x in MT. Want to turn off Tau, or turn on MCE? Want to run an MSI or Gigabyte board at default settings? Want to make a push on the 3900x? Then you need to know what you need to cool it: not a cooler that would be heat-soaked at 200w or 300w, but one that can actually keep up with a CPU flashing up to 400w. That isn't a cheap cooler, and knowing what it takes to become competitive is an important part of the discussion. That's not to say you always compare to a 3900x with no cooler. But it is important to know that all this is not needed and is more of a quality-of-life decision, one you also need to weigh when picking a cooler out for the 10900k.
 

Hitman928

Platinum Member
Apr 15, 2012
How is this troll thread not deleted yet? It's clear misinformation and a blatant lie...
Skip to 4:50 mark...

And with the same cooler, this i9-10900K runs cooler than all the Ryzens tested here. So much for the "omg it runs HOT HOT HOT" as said by the OP ;)
It all depends on whether you run the 10900k enforcing Intel's power guidelines or not. If you do (as shown in the video), it's not bad to cool and can be handled pretty easily by a decent AIO or a good air cooler. You'll still hit the same single/lightly-threaded frequencies as letting it go all out, but you'll lose 10 - 15% performance in applications that can really hit all cores. Competitively this puts the chip in a weird spot, because if you just want the single/light-threaded performance, you could have had that for a while now with the 9900K, or you could just get the 10700k; the 10900k doesn't really move the needle for Intel here. If you want it for the full multi-threaded performance, then you need to run it all out to try and catch up to the cheaper 3900x, which makes the 10900k a lot more difficult to cool: you need a minimum of a high-quality 280 mm AIO. If you are hitting AVX code hard, even that might cause you to start running into thermal limitations. I'm not willing to say there's no place for the 10900k, but I don't see it attracting many customers given its competitive positioning. If you want the chip and think it fits your needs, great, go for it. Just come with a hefty cooler and enable MCE if you really want it for its multi-threaded performance.
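"Enforcing Intel's power guidelines" comes down to three numbers on this part: PL1 (125 W), PL2 (250 W), and Tau (56 s). A simplified sketch of the rule; real firmware uses an exponentially weighted moving average of power, while this toy version uses a plain windowed average, so the burst length comes out shorter than on real hardware:

```python
# Toy model of Intel's PL1/PL2/Tau turbo budget for a 10900K-like part.
# Real firmware uses an exponentially weighted average over Tau; a plain
# windowed average is used here for clarity, which shortens the burst.
from collections import deque

PL1, PL2, TAU = 125.0, 250.0, 56  # watts, watts, seconds

def simulate(demand_w, seconds=120):
    """Power the chip is allowed to draw each second under a constant demand."""
    window = deque(maxlen=TAU)
    allowed = []
    for _ in range(seconds):
        avg = sum(window) / TAU          # average power over the Tau window
        cap = PL2 if avg < PL1 else PL1  # burst until the average catches up
        draw = min(demand_w, cap)
        window.append(draw)
        allowed.append(draw)
    return allowed

trace = simulate(demand_w=250.0)
print("seconds spent at PL2:", trace.count(PL2))
print("settled draw:", trace[-1])
```

Lifting the limits (MCE, Tau disabled) is equivalent to setting PL1 = PL2 and letting the burst run forever, which is where the cooling requirement changes character.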
 

DrMrLordX

Lifer
Apr 27, 2000
Well, I did a quick search and it seems people have tried doing that. There's just a slight performance loss, but that means it's probably hitting 95°C.
It will reach the same peak temps as a 3900x in most cases. Same TDP, same transistor density. AMD's recommendations are for the enthusiasts who buy top-end CPUs (which is why they don't bundle a cooler with it) who might use PBO or otherwise overclock the chip.
 

Zucker2k

Golden Member
Feb 15, 2006
All that is crazy different that a trying to go from 140w, to cooling 200w, 300w, or as Phoronix picked up 400w.
400 Watts? Then how is it able to run 25% cooler than a 3900x which only consumes up to 144 watts? Surely, the laws of physics do apply to the 10900k?

[attached screenshots: temperature and power charts]

And here:

[attached screenshot]

4.9GHz ALL-CORE, AVX2, 84°C.

Nuts!
 

moinmoin

Golden Member
Jun 1, 2017
I also think people are being a bit disingenuous on the power usage and heat concerns that sparked this thread to begin with. The 10900k behaves somewhat similarly to Zen 2 when operating under the recommended power limits. The lighter the load, the more clock you get. I don't think it's fair to criticize the CPU for being capable of being cooled while OPTIONALLY drawing a butt-ton of power for higher clocks.
My biggest problem with Intel is that they've rendered anything called "stock" meaningless. Stock TDP refers to the base frequency, which Intel chips never run at. Stock turbo refers to some turbo behavior expecting sufficient cooling, something an Intel stock cooler, if included, never actually offered. Stock settings on motherboards have evolved into bleeding-edge overclocking settings at default that have no relation anymore to what was originally referred to as stock behavior. And so on.

So Intel's chips are great for toying with settings and actually having the choice of tweaking a lot of details, and the 10th gen introduces even more tweakable parameters with TVB and per core switchable SMT. But I feel this all just obfuscates that these 10th gen Core chips in the end still are little more than an outdated insecure uarch on an outdated inefficient node.
 

DrMrLordX

Lifer
Apr 27, 2000
@piokos


3950x tested with low-profile coolers (the NH-L12S in particular). That's a 105W-rated cooler. The Wraith Prism is rated for at least 142W. The two compared directly:


The 3950x definitely does not require water cooling, and it does not bounce off the 95C temp limit in something like Blender. The 10900k is an entirely different ball game.

So I go back to my original point. You do not need to spend lots of money on a hefty cooler for the 3950x. A spare Wraith Prism would suffice, making the 3950x a direct price competitor for a 10900k once cooling price is factored in. Wraith Prisms are free + shipping from any number of 3700x, 3800x, and 3900x owners who have ditched theirs.
 

Topweasel

Diamond Member
Oct 19, 2000
400 Watts? Then how is it able to run 25% cooler than a 3900x which only consumes up to 144 watts? Surely, the laws of physics do apply to the 10900k?
I wasn't the one that brought up 400w. I assume at default Tau and no MCE the 10900k is going to run pretty close to the 3900x, in fact maybe cooler, between being less dense and the work they put into the heat spreader's connection to the die. As long as the cooler is fine with the 30 seconds of 200w usage, then it should compare well to the 3900x temp-wise.
 

DrMrLordX

Lifer
Apr 27, 2000
@Zucker2k

3900x is faster in MT tests than the 10900k, and cheaper. If you want it to be the competitor, then the 3900x wins where it needs to (MT workloads). Look at the encoding and rendering benchmarks in the AT review: the 3900x wins 3 out of 4 render benchmarks (losing only POV-Ray) and wins Handbrake (2 out of 3). It also wins AES encryption, 7zip, Cinebench R20 MT, GB4 MT, 3DPM (with and without AVX), and y-cruncher MT, and probably would have won NAMD had AT remembered to include the results.

The reality is that Intel has made it so expensive to own and operate a 10900k that you can get a 3950x for the same amount of money. It's really quite pathetic.
 

Zucker2k

Golden Member
Feb 15, 2006
@Zucker2k

3900x is faster in MT tests than the 10900k, and cheaper. If you want it to be the competitor, then the 3900x wins where it needs to (MT workloads). Look at the encoding and rendering benchmarks in the AT review: the 3900x wins 3 out of 4 render benchmarks (losing only POV-Ray) and wins Handbrake (2 out of 3). It also wins AES encryption, 7zip, Cinebench R20 MT, GB4 MT, 3DPM (with and without AVX), and y-cruncher MT, and probably would have won NAMD had AT remembered to include the results.

The reality is that Intel has made it so expensive to own and operate a 10900k that you can get a 3950x for the same amount of money. It's really quite pathetic.
You know, you're right. Unless you're blasting away at full throttle and dumping heat into your room like a dragon, the 10900k is faster? I'll take that. The 3950x with its $750 MSRP at launch, and to a lesser degree the $500 3900x, really should be Threadripper 3950x and 3920x respectively. They are effectively masquerading as desktop chips and being embarrassed in typical desktop workloads like gaming, web browsing, etc. by mid-range hexacores and octocores.
 

piokos

Senior member
Nov 2, 2018
I don't think you got the point.
I don't think you get sarcasm...
Yes, Puget, not Phoronix. My bad. As far as ST and MT, sure, there should be a mix. But how should that be weighted, and how is it weighted? Because the results are pretty ST-heavy. That doesn't mean it's userbenchmark-like weighting. But I feel that outside of a few of the more common functions, the mix is going to change heavily from use case to use case.
Of course.
And the most popular benchmark on gaming sites/forums - Cinebench - is perfectly parallel and almost completely ignores CPU latencies, which makes it a poor choice for a "universal" benchmark and extremely bad for games in particular.
And the second commonly used benchmark on gaming sites is - inevitably - Blender. :)
And how many AMD fans complain? :)
But when a review includes some Adobe programs, people basically grab pitchforks.
Didn't know they broke it out quite like that. That's great actually. The problem is I don't like the the "Puget" tests that some reviewers put out where they run either a script or tool set that runs this test for them, because it hides information, including weighting, and assumes that previous experience with them equals current experience.
I think the weighting is usually given. Maybe not explicitly, or it's hidden somewhere in the documentation.
Otherwise sure: some reviewers give you very little data. Well, you can always go to Phoronix for reference. Their testing is open-source. :)

Some tests can't be fully transparent because... well, because software isn't. You really don't know what's happening inside a game or even inside Cinebench. :)
Do you know what makes the 10900k special compared to even the 10900? Intel sands down the silicon and uses a spreader that is thicker where the die is. This gives the die better contact with the spreader, so the spreader can do its job and feed heat to the HSF that much quicker. They need to do this to maximize the cooling available during spiky power usage, to not roast the die too quickly.
I'm aware of that. What's bad about it? Yes, they did what they could to reduce temperature. It is important.
And on non-K SKUs it's probably a lot less relevant.
That said, I haven't seen any official info that this is true only for -K SKUs (and I don't think non-K chips have already been sent to reviewers).
Is it confirmed?
Want one that can actually keep up with a CPU flashing up to 400w.
I don't know why you keep repeating those 400W. This has nothing to do with how much heat the CPU generates.
 

DrMrLordX

Lifer
Apr 27, 2000
Unless you're blasting away at full throttle and dumping heat into your room like a dragon, then a 10900k is faster?
Bahahah

A 10900k is just a 9900k with two extra cores, and overclocked. For 56s. If you want that then go right ahead! Otherwise, it is amusing that anyone could extract such a bogus point from anything I said. Meanwhile, sane people will be waiting until (I guess) October.
 

Ajay

Diamond Member
Jan 8, 2001
Alder Lake does look strange, but it keeps showing up as Intel's Q4-ish desktop chip for 2021. If Alder Lake-S isn't Intel's desktop mainstay at that point, Intel will be stuck with Rocket Lake-S as their flagship desktop product until whenever they manage to launch Meteor Lake. And that might not happen until 2022 or 2023!
I suppose I keep hoping against hope that this is a red herring. Wasting die space on 8 small cores is insane for a high end desktop chip.
 

jpiniero

Diamond Member
Oct 1, 2010
I suppose I keep hoping against hope that this is a red herring. Wasting die space on 8 small cores is insane for a high end desktop chip.
The mainstream desktop has pretty much been derived from mobile. If mobile has it, so will desktop.
 

Topweasel

Diamond Member
Oct 19, 2000
Of course.
And the most popular benchmark on gaming sites/forums - Cinebench - is perfectly parallel and almost completely ignores CPU latencies, which makes it a poor choice for a "universal" benchmark and extremely bad for games in particular.
And the second commonly used benchmark on gaming sites is - inevitably - Blender. :)
And how many AMD fans complain? :)
But when a review includes some Adobe programs, people basically grab pitchforks.
I will never complain about the use of Adobe, or hell, even Matlab while it was still broken. I don't mind Blender. I don't mind benchmarks where my preference in CPU doesn't do well; it doesn't help you or me make the right purchase if I stick my head in the sand. What I generally am not a big fan of are benchmarks (especially ones with unlisted functions) that "simulate" "average" workloads, because the level of bias in the test suite is subjective and therefore at the mercy of the actual biases of the developer.

I think weighting is usually given. Maybe not explicitly or hidden somewhere in the documentation.
Otherwise sure: some reviewers give you very little data. Well, you can always go to Phoronix for reference. Their testing is open-source. :)

Some tests can't be fully transparent because... well, because software isn't. You really don't know what's happening inside a game or even inside Cinebench. :)
If it can't be transparent about what it's running, what it is testing for, and its weighting, then it isn't a good benchmark. Better to take a tool and a task and compare that. Now, it's also true that I don't know whether there are hidden tasks in the Blender or Cinebench tests, but I assume that, as benchmarks for actual usable tools released by the companies that develop them, it's just a render. But I feel better about a Maxon benchmark for their Cinema4D tool being an accurate representation of how these CPUs would perform in Cinema4D than about a third party saying that the combination of this filter, this mix, this task, plus this export, with this weighting between each task, is the "average" use case. That's not to discount Puget's numbers. It's just not one of the tests I see people run that I put my personal weight in.
I'm aware of that. What's bad about it? Yes, they did what they could to reduce temperature. It is important.
And on non-K SKUs it's probably a lot less relevant.
That said, I haven't seen any official info that this is true only for -K SKUs (and I don't think non-K chips have been already sent to reviewers).
Is it confirmed?
All the techtubers that brought it up specified it was for the K SKUs, and maybe even just it and the 10700k. Not "official" per se, but I assume they got the info from Intel, since they would be the only ones to know what they did to the spreader. I don't have an issue with it; honestly, it's a great idea. Below I will show how it smooths off some of the rough edges of its power usage but doesn't help the way you would think.

I was bringing it up because it's not about reducing temperature. Well, it is, but it isn't; this goes back to my points earlier in the thread. It's not about heat or "hot" or any of that funky jazz. It allows the transfer of heat to happen quicker, which is good. It means that with the same cooler at the same power level, compared to a chip they didn't do the work on, the sanded one with the thicker plate will inevitably have lower temps. Which is fine and dandy. It helps prevent hot-spotting. What it doesn't do is make the cooler any better. It does mean that power and heat levels that might not get to the cooler quickly enough on a 9900k might work on a 10900k. But at the end of the day, a cooler rated for 140w will still heat-soak at that point, causing a thermal runaway, which the CPUs prevent by throttling if you are running any more than that. Which means that until the CPU is done with its task, you are at the thermal limit of the CPU.

Let's take a 3900x, a 10900k, and a 9900k on an H100i. I don't know its exact thermal limit, and my main point with all of these CPUs was getting the same quality of life (noise level), so I am going to set the TDP headroom at 160w to synthesize the same result while talking through it.

3900x: hot spots during low-core usage bring just about any heavy core usage to ~60c. While a core is in use, nothing short of chilling will get that core to run cooler, because of the limits of transfer. As the number of active cores goes up, power increases until about 8-10 cores, where it reaches an upper limit of 140w (give or take a few). The package cooling is mostly fine with this, as the CPU's temps don't go up dramatically with core usage: we aren't closing in on the thermal limit of the CPU, each core is creating its own hotspot, and the package on its own is keeping up. Until you get up to the 140w limit. As you close in, package and core temps start to equalize, and you can find yourself getting toasty depending on the cooler. At that 140w you are nearing, but still under, the thermal limit of the cooler, meaning the CPU never gets into throttling territory.

9900k: with MCE on and no Tau, as a lot of reviewers tested it, the CPU regularly ran at 160w under load. For this comparison, the 9900k has the same thermal-transfer characteristics as the 3900x but a less dense, larger contact area to the heat spreader. That means, everything else matching, the 9900k is probably going to be as cool if not cooler than the 3900x at a given power level. As it exceeds the 3900x's 140w, it will then start to exceed the 3900x in temp. When it hits 160w, it's at the thermal limit of the cooler, meaning, everything equal, the two will even out and the CPU will be at its thermal limit: probably able to adjust slightly here or there to maintain performance, but basically always at its thermal limit.

10900k: at default, so 30s Tau and no MCE, the 10900k can run at roughly 200w for about 30 seconds, assuming everything else is fine. One of my questions was how they could keep temps sub-65c to manage any TVB. The answer is in the sanding. Again, everything else equal: while the cooler isn't heat-soaked at its limit, at every power level the CPU will stay cooler and will rebound quicker when power usage drops. But that doesn't change the limits of the cooling. So at default settings, the idea is that as it shoots up to 200w for that 30s, the cooler takes time to reach thermal runaway. During that time it runs super high, then over-corrects and bounces all the way down to the 125w rating. This gives it the appearance of being pretty energy efficient. It isn't; it's that Intel knows that, having submitted so much heat to the cooler, staying anywhere between the TDP rating and 200w could keep the thermal runaway going, especially since they don't know whether you are using a 120/140/160/200w cooler. The new transfer technique with the sanding means it doesn't hot-spot itself into the thermal limits of the CPU in the meantime. Once the cooler has rebounded from the drop to 125w, it will then again be cooler than a 3900x and probably a 9900k at that power usage. From there, though, problems get bad real quick. Disable Tau like people did on the 9900k? Then you need a cooler that can take 200w, so this H100i (again, rated at a noise level) won't work. Then enable MCE: 300-330w, meaning you need a cooler twice as good as the H100i (at the rated noise level).

The 10900k handles power usage better than the 9900k with its quicker, better transfer to the HSF. So when everything, and I mean everything, is equal, temps are lower on it. What it doesn't do is make a cooler better than it is.
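The heat-soak argument above can be put into a crude transient model: the cooler is a thermal mass that charges up, a short Tau burst ends before the mass saturates, and a sustained 200 W load does not. All constants below are invented purely to show the shape of the behavior, not to match any real cooler:

```python
# Crude transient model of cooler heat soak: the cooler is one thermal
# mass C (J/K) draining to ambient through a resistance R (K/W).
# All constants are invented for illustration, not measured.
AMBIENT = 25.0
C = 600.0   # cooler thermal capacity, J/K
R = 0.30    # cooler-to-ambient resistance, K/W

def run(loads, dt=1.0):
    """loads: list of (watts, seconds) phases. Returns the cooler temp trace."""
    temp, trace = AMBIENT, []
    for watts, seconds in loads:
        for _ in range(int(seconds / dt)):
            # Euler step: heat in from the CPU minus heat shed to ambient.
            temp += dt * (watts - (temp - AMBIENT) / R) / C
            trace.append(temp)
    return trace

# Stock behavior: a 200 W burst for Tau (~30 s here), then settle to 125 W.
stock = run([(200.0, 30), (125.0, 270)])
# Limits lifted: 200 W sustained for the same five minutes.
unlocked = run([(200.0, 300)])

print(f"stock, after 5 min:    {stock[-1]:.1f} C")
print(f"unlocked, after 5 min: {unlocked[-1]:.1f} C")
```

With these made-up constants, the burst ends long before the mass saturates and the trace settles near the 125 W steady state, while the sustained load keeps charging the cooler toward its much hotter 200 W steady state. That is the whole heat-soak argument in two traces.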

I don't know why you keep repeating those 400W. This has nothing to do with how much heat the CPU generates.
Dang, I should have responded to this before I wrote up everything else. If you don't see how wrong this is, everything else probably doesn't make any sense.
 

Makaveli

Diamond Member
Feb 8, 2002
4,022
76
91
I will never complain about the use of Adobe, or hell even Matlab while it was still broken. I don't mind Blender. I don't mind benchmarks where my preference in CPU doesn't do well. It doesn't help you or me to make the right purchase sticking my head in the sand. What I generally am not a big fan of are benchmarks (specially ones with unlisted functions) that "simulate" "average" workloads. Because generally that, the level of bias in the test suite is subjective and therefore at the mercy of the actual Bias's by the devloper.

If it can't be transparent about what its running, what it is testing for, and its weighting then it isn't a good benchmark. Better to take a tool and a task and compare that. Now its also true I don't know if there are any other tasks related the blender or Cinebench test that is hidden but I assume as benchmarks for actual usable tools release by the companies that develop them that its just a render. But I feel better about a Maxxon benchmark for their Cinema4D tool being an accurate representation of how these CPU's would perform in Cinema4D. Not a third party saying well the combination of this filter and this mix and this task plus this export, with this weighting between each task is, the "average" use case. That's not to discount Puget's numbers. Just not one of the tests I see people run that I put my personal weight into.
All the techtubers that brought it specified it was for the K skus and maybe even just it and the 10700k. Not "official" perse. But I assume they got the info from Intel since well they would be the only ones to know what they did for the spreader. I don't have an issue with it and honestly it's a great idea. Below I will show how it smooths off some of the rough edges of it power usage but doesn't help the way you would think.

I was bringing it up because its not about reducing temperature. Well it is but it isn't this goes back my points back earlier in the thread. It's not about heat or "hot". Or any of that funky jaz. It allows the transfer of heat to happen quicker. Which is good. It means that with the same cooler at the same power level on a chip they didn't do the work on inevitably the sanded one with the thicker plate will have lower temps. Which is fine and dandy. What it doesn't do is make the cooler any better. It helps prevent hot spotting. What it doesn't do is make a cooler better. What it does mean is power and heat levels that might not get to the cooler quick enough on a 9900k, might work on a 10900k. But at the end of the day a cooler rated for 140w, will still heat soak at that point, causing a thermal runaway which they CPU's prevent by throttling if you are running any more than that. Which means until the CPU is done with its task you are at the thermal limit of the CPU.

Let's take a 3900X, a 10900K, and a 9900K on an H100i. I don't know what the cooler's exact thermal limit is, and my main point for all of these CPUs was getting the same quality of life (noise level), so I am going to set the cooler's headroom at 160W and synthesize the result while talking through it.

3900X: hot spots during low core usage bring just about any heavy core load to ~60C. While a core is in use, nothing short of chilling will get it to run cooler, because of transfer limits. As core count goes up, power increases until about 8-10 cores, where it reaches an upper limit of 140W (give or take a few). Package cooling is fine for the most part, as in-use core temps don't go up dramatically. We aren't closing in on the thermal limit of the CPU; each core is creating its own hotspot, but the package on its own is keeping up. Until you get up to the 140W limit, that is. As you close in on it, package and core temps start to equalize, and you can find yourself getting toasty depending on the cooler. At that 140W you are nearing, but still under, the thermal limit of the cooler, meaning the CPU never gets into throttling territory.

9900K: with MCE on and no Tau, as a lot of reviewers tested it, the CPU regularly ran at 160W under load. For this test, the 9900K has the same thermal transfer characteristics as the 3900X, but a less dense die with a larger contact area to the CPU heatspreader. That means, assuming everything else matches, the 9900K is probably going to be as cool, if not cooler, than the 3900X at a given power level. As it exceeds the 3900X's 140W, it will start to exceed the 3900X in temp. When it hits 160W it's at the thermal limit of the cooler, meaning, everything being equal, the two will equalize and the CPU will sit at its thermal limit. It can probably adjust slightly here or there to maintain performance, but it's basically always at the thermal limit.

10900K: at default, so 30s Tau and no MCE, the 10900K can run at roughly 200W for about 30 seconds, assuming everything else is fine. One of my questions was how they could keep the temps sub-65C to manage any TVB. The answer is in the sanding. Again, assuming everything else equal, as long as the cooler isn't heat soaked at its limit, the CPU will stay cooler at every power level and will rebound quicker when power usage drops. But that doesn't change the limits of the cooling. So at the default settings, the idea is that as it shoots up to 200W of usage for that 30s, the cooler takes time to reach thermal runaway. During that time it runs super high. Then it overcorrects and bounces all the way down to the 125W rating. This gives it the appearance of being pretty energy efficient. It isn't, but Intel knows that, having dumped so much heat into the cooler, staying anywhere between the TDP rating and 200W could possibly keep the thermal runaway going, especially since they don't know if you are using a 120/140/160/200W cooler. The new transfer technique with the sanding means it doesn't hot spot itself into the thermal limits of the CPU in the meantime. Once the cooler has rebounded from the drop to 125W, it will again be cooler than a 3900X, and probably a 9900K, at that power usage. From there, though, problems get bad real quick. Disable Tau, like people did on the 9900K, and you need a cooler that can take 200W, so this H100i (again, rated at a noise level) won't work. Enable MCE on top of that and you're at 300-330W, meaning you need a cooler twice as good as the H100i (at the rated noise level).
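The default behaviour just described (roughly PL2 for about one Tau window, then a drop to PL1) can be sketched with a moving-average power budget. This is a simplified illustration, not Intel's exact algorithm; the 125W/200W/30s values are the round numbers from this post, and real silicon uses a more involved exponentially weighted average.

```python
# Simplified sketch of PL1/PL2/Tau turbo behaviour: the chip may draw up to
# PL2 while its moving-average power stays under PL1; once the average
# catches up, it clamps to PL1. Values are the round numbers from the post.

PL1 = 125.0   # sustained power limit (W)
PL2 = 200.0   # short-term turbo limit (W)
TAU = 30.0    # turbo time window (s)

def simulate(seconds, demand=200.0):
    ewma = 0.0   # moving average of package power, starting from idle
    draw = []
    for _ in range(seconds):
        # Turbo is allowed only while the average is still under PL1.
        p = min(demand, PL1 if ewma >= PL1 else PL2)
        draw.append(p)
        ewma += (p - ewma) / TAU   # exponentially weighted moving average
    return draw

trace = simulate(60)
turbo_seconds = sum(1 for p in trace if p > PL1)
print(f"ran at {PL2:.0f}W for {turbo_seconds}s, then dropped to {trace[-1]:.0f}W")
```

Run against a sustained 200W demand, the sketch holds 200W for roughly the Tau window and then falls to 125W for the rest of the workload, which is exactly the "looks efficient in short bursts" shape described above.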

The 10900K handles power usage better than the 9900K, with its quicker, better transfer to the heatsink. So when everything, and I mean everything, is equal, temps are lower on it. What it doesn't do is make a cooler better than it is.


Dang, should have responded to this before I wrote up everything else. If you don't know how wrong you are, everything else probably doesn't make any sense.
lol, he just keeps digging down the rabbit hole. It's obvious who the Team Blue guys in the thread are; it has been entertaining watching everyone break down their posts.
 

Zucker2k

Golden Member
Feb 15, 2006
1,053
454
136
Dang, should have responded to this before I wrote up everything else. If you don't know how wrong you are, everything else probably doesn't make any sense.
He already explained, but you're probably skimming through his remarks. That 380W the reviewer mentioned was NOT a sustained draw.
 

Rigg

Member
May 6, 2020
89
161
61
It all depends on whether you run the 10900K enforcing Intel's power guidelines or not. If you do (as shown in the video), it's not bad to cool and can be handled pretty easily with a decent AIO or good air cooler. You'll still hit the same single/lightly threaded frequencies as letting it go all out, but you'll lose 10-15% performance in applications that can really hit all cores. Competitively, this puts the chip in a weird spot: if you just want the single/light threaded performance, you could have had that for a while now with the 9900K, or you could just get the 10700K; the 10900K doesn't really move the needle for Intel here. If you want it for the full multi-threaded performance, then you need to run it all out to try and catch up to the cheaper 3900X, which makes the 10900K a lot more difficult to cool, and you need a minimum of a high-quality 280mm AIO. If you are hitting AVX code hard, even that might run into thermal limitations. I'm not willing to say there's no place for the 10900K, but I don't see it attracting many customers given its competitive positioning. If you want the chip and think it fits your needs, great, go for it. Just come with a hefty cooler and enable MCE if you really want it for its multi-threaded performance.
This guy gets it. The CPUs are actually pretty good. The price and the platform cost still suck. It's priced better than 9th gen but still worse than AMD. The pricing, and the lack of a direct core/thread competitor for the 10900K, puts it in a weird place in the market. The only CPU in the whole lineup that piqued my interest enough to consider building a system around is the 10700F. It should sell for $325 or so and run fine on a low-end Z490 or a good H470. That should be pretty price competitive with a 3700X (unless you're near a Micro Center) if they don't drop the price.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
20,137
7,635
136
I am in the hospital recovering from surgery, so I will keep this short and sweet. Since the Intel advocates will argue every point and make this CPU look good, I will not reply to back up my arguments; many people, plus the AnandTech review and the Phoronix review, are my sources. This CPU is just a 9900K with 2 cores added. It takes up to 380 watts when fully engaged, and to do that you need custom water. For benchmarks, like the 9900K, it will eke out a few percent over Ryzen, but for productivity even a 3900X beats it MOST of the time, and the total cost is more than a 3950X. So it barely beats a 9900K in games, runs hotter, and gets beaten at most everything else by a 3900X, so why buy it? It's simply a bad release.
 
