Of course.
And the most popular benchmark on gaming sites/forums - Cinebench - is perfectly parallel and almost completely ignores CPU latencies, which makes it a poor choice for a "universal" benchmark and extremely bad for games in particular.
And the second commonly used benchmark on gaming sites is - inevitably - Blender.
And how many AMD fans complain?
But when a review includes some Adobe programs, people basically grab pitchforks.
I will never complain about the use of Adobe, or hell, even Matlab while it was still broken. I don't mind Blender. I don't mind benchmarks where my preferred CPU doesn't do well; it doesn't help you or me make the right purchase if I stick my head in the sand. What I'm generally not a big fan of are benchmarks (especially ones with unlisted functions) that "simulate" "average" workloads, because the weighting of the test suite is subjective and therefore at the mercy of the actual biases of the developer.
I think the weighting is usually given. Maybe not explicitly; it may be hidden somewhere in the documentation.
Otherwise sure: some reviewers give you very little data. Well, you can always go to Phoronix for reference. Their testing is open-source.
Some tests can't be fully transparent because... well, because software isn't. You really don't know what's happening inside a game or even inside Cinebench.
If it can't be transparent about what it's running, what it's testing for, and its weighting, then it isn't a good benchmark. Better to take a tool and a task and compare that. Now, it's also true I don't know whether there are any other hidden tasks in the Blender or Cinebench tests, but since they're benchmarks for actual usable tools released by the companies that develop them, I assume it's just a render. I feel better about a Maxon benchmark for their Cinema 4D tool being an accurate representation of how these CPUs would perform in Cinema 4D than about a third party saying that the combination of this filter, this mix, this task, plus this export, with this weighting between each task, is the "average" use case. That's not to discount Puget's numbers; it's just not one of the tests I see people run that I put my personal weight behind.
I'm aware of that. What's bad about it? Yes, they did what they could to reduce temperature. It is important.
And on non-K SKUs it's probably a lot less relevant.
That said, I haven't seen any official info that this is true only for -K SKUs (and I don't think non-K chips have been sent to reviewers yet).
Is it confirmed?
All the techtubers that brought it up specified it was for the K SKUs, and maybe even just the 10900K and the 10700K. Not "official" per se, but I assume they got the info from Intel, since they would be the only ones to know what they did with the spreader. I don't have an issue with it, and honestly it's a great idea. Below I will show how it smooths off some of the rough edges of its power usage but doesn't help the way you would think.
I was bringing it up because it's not about reducing temperature. Well, it is, but it isn't; this goes back to my points earlier in the thread. It's not about heat or "hot" or any of that funky jazz. It allows the transfer of heat to happen quicker, which is good. It means that with the same cooler at the same power level, the sanded chip with the thicker plate will inevitably have lower temps than a chip they didn't do the work on. Which is fine and dandy. What it doesn't do is make the cooler any better. It helps prevent hot spotting; it does not make the cooler better. What it does mean is that power and heat levels that might not get to the cooler quickly enough on a 9900K might work on a 10900K. But at the end of the day, a cooler rated for 140 W will still heat soak at that point, causing a thermal runaway, which the CPUs prevent by throttling if you are running any more than that. Which means that until the CPU is done with its task, you are at the thermal limit of the CPU.
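To put rough numbers to the "quicker transfer, same cooler" point, here's a minimal steady-state lumped thermal model. Every constant is invented for illustration, not measured from any of these chips: a better IHS lowers the die-to-cooler resistance, so the die runs cooler at a given power, but the cooler-to-ambient resistance belongs to the cooler and doesn't move.

```python
# Minimal steady-state lumped thermal model (illustrative, made-up numbers).
# T_die = T_ambient + P * (R_cooler + R_ihs)
# A sanded/thicker IHS lowers R_ihs (die-to-cooler transfer), but
# R_cooler is a property of the cooler and is unchanged.

T_AMBIENT = 25.0   # deg C
R_COOLER = 0.25    # deg C per watt, cooler-to-ambient (fixed by the cooler)

def cooler_temp(power_w):
    """Steady-state cooler temperature: independent of the IHS."""
    return T_AMBIENT + power_w * R_COOLER

def die_temp(power_w, r_ihs):
    """Steady-state die temperature for a given die-to-cooler resistance."""
    return cooler_temp(power_w) + power_w * r_ihs

for label, r_ihs in [("stock-style IHS", 0.15), ("sanded/thicker IHS", 0.10)]:
    print(f"{label}: die {die_temp(160, r_ihs):.0f} C, "
          f"cooler {cooler_temp(160):.0f} C at 160 W")
```

In this toy model the cooler sits at the same temperature at 160 W either way; only the die-to-cooler delta shrinks, which is the point above: the transfer improvement moves the die temperature, not the cooler's limit.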
Let's take a 3900X, a 10900K, and a 9900K on an H100i. I don't know what the cooler's actual thermal limit is, and my main point with all of these CPUs was getting the same quality of life (noise level), so I am going to set the cooler's headroom at 160 W and synthesize the result while talking through it.
3900X: Hot spots during low-core usage bring just about any heavy single-core load to ~60 C. While a core is in use, nothing short of chilling will get that core to run cooler, because of the limits of transfer. As core count goes up, power increases until about 8-10 cores, where it reaches an upper limit of 140 W (give or take a few). The package cooling is fine for the most part, as the CPU's temps don't go up dramatically with core usage; we aren't closing in on the thermal limit of the CPU, each core is creating its own hotspot, but the package on its own is keeping up. Until you get up to the 140 W limit. As you close in, package and core temps start to equalize and you can find yourself getting toasty depending on the cooler. At that 140 W you are nearing, but still under, the thermal limit of the cooler, meaning the CPU never gets into throttling territory.
9900K: With MCE on and no Tau, as a lot of reviewers tested it, the CPU regularly ran at 160 W under load. So for this test the 9900K has the same thermal-transfer characteristics as the 3900X, but a less dense, larger contact area transferring to the heat spreader. That means, assuming everything else matches, the 9900K is probably going to be as close if not lower in temperature at a given power level than the 3900X. As it exceeds the 3900X's 140 W, it will start to exceed the 3900X in temp. When it hits 160 W it's at the thermal limit of the cooler, meaning, everything being equal, the two will even out and the CPU will be at its thermal limit. It's probably able to adjust slightly here or there to maintain performance, but it's basically always at the thermal limit.
10900K: At default, so a 30 s Tau and no MCE, the 10900K can run at roughly 200 W for about 30 seconds, assuming everything else is fine. One of my questions was how they could keep the temps sub-65 C to manage any TVB. The answer is in the sanding. Again, assuming everything is equal, while the cooler isn't heat-soaked at its limit, the CPU will stay cooler at every power level and will rebound quicker when power usage drops. But that doesn't change the limits of the cooling. So at the default settings, the idea is that as it shoots up to 200 W of usage for that 30 s, the cooler takes time to get to thermal runaway. During that time it runs super high, then over-corrects and bounces all the way down to the 125 W rating. This gives the picture of being pretty energy efficient. It isn't, but they know they have dumped so much heat into the cooler that staying anywhere between the TDP rating and 200 W could keep the thermal runaway going, especially since you don't know if the user has a 120/140/160/200 W cooler. The new transfer technique with the sanding means that it doesn't hot-spot itself into the thermal limits of the CPU in the meantime. Once the cooler has rebounded from the drop to 125 W, it will again be cooler than a 3900X and probably a 9900K at that power usage. From there, though, problems get bad real quick. You disable Tau like people did on the 9900K? Well, then you need a cooler that can take 200 W, so this H100i (again, rated at a noise level) won't work. If you then enable MCE: 300-330 W, meaning you need a cooler twice as good as the H100i (at the rated noise level).
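The default Tau behaviour described above can be sketched as a simple step model. Note this is a simplification: real Turbo Boost tracks an exponentially weighted running average of power rather than a hard cutoff, and the 200 W / 125 W / 30 s figures are just the ones from this post.

```python
# Simplified sketch of default Intel power-limit behaviour: burst at PL2
# for Tau seconds, then clamp to PL1. Real hardware uses a running
# average of power, not a hard time cutoff; this is just the shape.

PL1 = 125.0   # W, long-term limit (the TDP rating)
PL2 = 200.0   # W, short-term burst limit
TAU = 30.0    # s, burst window

def allowed_power(t_since_load_start, demand_w):
    """Power the CPU may draw t seconds into a sustained load."""
    limit = PL2 if t_since_load_start < TAU else PL1
    return min(demand_w, limit)

for t in (0, 10, 29, 30, 60):
    print(f"t={t:3d}s -> {allowed_power(t, 250):.0f} W")
# Draws 200 W for the first 30 s, then drops to 125 W for the rest
# of the load, which is the bounce described above.
```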
The 10900K handles power usage better than the 9900K with its quicker, better transfer to the HSF, so that when everything, and I mean everything, is equal, temps are lower on it. What it doesn't do is make a cooler better than it is.
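As a toy illustration of the heat-soak argument running through all three scenarios: feed a cooler with finite thermal mass more power than it can dissipate and it keeps climbing until the die hits the throttle point, no matter how good the IHS transfer is. Every constant here is invented for illustration.

```python
# Toy time-stepped heat-soak model (all constants invented). The cooler
# integrates heat in minus heat out; the die rides a fixed delta above it.

DT = 1.0           # s, timestep
C_COOLER = 800.0   # J per deg C, thermal mass of the cooler/loop
R_COOLER = 0.4     # deg C per watt, cooler-to-ambient
R_IHS = 0.1        # deg C per watt, die-to-cooler (the part sanding improves)
T_AMB = 25.0       # deg C
T_THROTTLE = 100.0 # deg C, die throttle point

def seconds_to_throttle(power_w, max_t=3600.0):
    """Seconds of sustained load until throttle, or None if the cooler keeps up."""
    t_cooler = T_AMB
    t = 0.0
    while t < max_t:
        if t_cooler + power_w * R_IHS >= T_THROTTLE:
            return t  # cooler has soaked enough that the die hits its limit
        heat_out = (t_cooler - T_AMB) / R_COOLER
        t_cooler += (power_w - heat_out) * DT / C_COOLER
        t += DT
    return None  # equilibrium stays below the throttle point

print(seconds_to_throttle(140))  # cooler keeps up indefinitely
print(seconds_to_throttle(200))  # soaks its way to throttle eventually
```

Under these made-up numbers, 140 W settles below the limit forever, while 200 W only looks fine at first and then soaks its way into sustained throttling, which is why the 30 s bounce down to 125 W exists.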
I don't know why you keep repeating those 400W. This has nothing to do with how much heat the CPU generates.
Dang, I should have responded to this before I wrote up everything else. If you don't know how wrong you are, everything else probably doesn't make any sense.