Article Tom's Hardware Core i9 9900KS Preview


Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Power efficiency is incredible for 14nm.
Edit: Consumes 50 watts less than the i9 9900K @ 5GHz.


Link
 
Last edited:

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,124
3,061
146
Seems like they are taking advantage of the silicon lottery to charge more. Still, better silicon is better silicon, and some people will pay the cost to have a super-fast 8-core CPU. The question, IMO, is whether they would be better off with a more competitive price.
 
  • Like
Reactions: Gikaseixas

chrisjames61

Senior member
Dec 31, 2013
721
446
136
Because gaming is my primary use, and I don't really care about productivity? To each his own, but maybe you could occasionally consider that other users may have different priorities than you.



Fair enough. That being said, I think the majority of enthusiasts consider value and have a more well-rounded use of their PC than just gaming.
 

jpiniero

Lifer
Oct 1, 2010
16,490
6,983
136
Fair enough. That being said, I think the majority of enthusiasts consider value and have a more well-rounded use of their PC than just gaming.

Value, definitely; more rounded, no. Intel doesn't have a good answer for the 2600 at $120, for instance, when you factor in that 4C/4T isn't enough at this point.
 

chrisjames61

Senior member
Dec 31, 2013
721
446
136
Value, definitely; more rounded, no. Intel doesn't have a good answer for the 2600 at $120, for instance, when you factor in that 4C/4T isn't enough at this point.



"more rounded" pertained to Ryzen being the better all around cpu's if you factor in other areas besides gaming.
 

jpiniero

Lifer
Oct 1, 2010
16,490
6,983
136
"more rounded" pertained to Ryzen being the better all around cpu's if you factor in other areas besides gaming.

That's what I'm saying: gaming is pretty much the only mainstream heavy usage. The other apps used would be things like Chrome and Office, where single/low-thread performance rules, but it would be tough to find value in buying a more expensive chip for that.

Edit: For HEDT, e-peen is what matters there, and Intel's going to get killed, especially assuming Threadripper 3 is no worse in gaming than Matisse.
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
22,692
12,638
136
When Sunny Cove arrives, are people going to forget single-core boost and emphasize IPC?

When? More like, if? Sunny Cove may never make it to the desktop. I'm still holding out hope that desktop Rocket Lake will be Sunny Cove . . . but it may not be.
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
The data is there, maybe one could, you know, let the reader make up his own mind, without hyperbolic commentary.

But literally the only situation in which you could recommend a 9900KS is if that user is:
- not doing any tasks that benefit from increased scaling
AND
- playing only games
AND
- playing only games at a resolution of 1920x1080 on a single screen
AND
- needs the performance gained from 180 to 195 FPS.

Otherwise, save money by getting the 3700X or get more performance by getting the 3900X.


That is not hyperbole. It's a fact. The 9900KS is chasing the niche[1] of a niche[2] of a niche[3].


[1]1080p gaming
[2]Game only PC users
[3]High end PC users
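The stacked AND-conditions above can be written as a single predicate. This is purely illustrative; the function name and parameters are made up to mirror the argument, not taken from anywhere:

```python
# Illustrative encoding of the niche-of-a-niche argument above: every
# condition must hold before the 9900KS is the recommendation.
def recommend_9900ks(benefits_from_core_scaling: bool,
                     games_only: bool,
                     plays_at_1080p: bool,
                     needs_180_to_195_fps: bool) -> bool:
    return (not benefits_from_core_scaling
            and games_only
            and plays_at_1080p
            and needs_180_to_195_fps)
```

Flip any one input and the recommendation falls through to the 3700X/3900X alternatives.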
 

jpiniero

Lifer
Oct 1, 2010
16,490
6,983
136
Otherwise, save money by getting the 3700X or get more performance by getting the 3900X.

That's why I mentioned the 9700K... the 9700K has enough of a gaming performance gap that both of those aren't that great a deal. The 2600 and 3600 are much better deals.
 
  • Like
Reactions: lightmanek

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
When set at a 95W long-term limit, the 9900K will go up to 119W briefly and won't reach 130W, while they are talking about sustained load, which means 130W permanently.

So much for your 95W pulled out of nowhere; the scores say it all. In NAMD it is running at 4.63GHz or so, while the water-cooled 9900KS is at 4.94; air-cooled, it is at 4.87...
My post wasn't an opinion; it was based on the reviewer stating they turned MCE off. Normally, with MCE off, an Intel CPU with the K(S) suffix reverts to stock settings. Why would the reviewer even mention it otherwise?

The slide you posted suggests the 9900K gained 0.15 points with a 300MHz overclock (if MCE is active), while the 9900KS only gained 0.08 points with a 200MHz overclock. This suggests the software doesn't scale well with frequency. The 3700X result compared to the 3900X also suggests the same.

Now, I've already posted that I suspect the 9900K was running at 4.7GHz, but in light of the reviewer stating MCE is off, what to do? Lol

Edit: the 9900K in stock mode will still turbo opportunistically.
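Normalizing the quoted gains to a common frequency step makes the scaling comparison concrete (this assumes the figures above are absolute score points, which the post does not state explicitly):

```python
# Gains quoted above: 9900K +0.15 points from a +300 MHz overclock,
# 9900KS +0.08 points from +200 MHz overclock.
def gain_per_100mhz(points: float, mhz: float) -> float:
    """Score gain normalized to a 100 MHz frequency step."""
    return points / (mhz / 100.0)

k_rate = gain_per_100mhz(0.15, 300)    # 9900K:  0.05 points per 100 MHz
ks_rate = gain_per_100mhz(0.08, 200)   # 9900KS: 0.04 points per 100 MHz
```

Either way the returns per MHz are small, which is consistent with the software not scaling well with frequency.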
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,835
4,789
136
My post wasn't an opinion; it was based on the reviewer stating they turned MCE off. Normally, with MCE off, an Intel CPU with the K(S) suffix reverts to stock settings. Why would the reviewer even mention it otherwise?

The slide you posted suggests the 9900K gained 0.15 points with a 300MHz overclock (if MCE is active), while the 9900KS only gained 0.08 points with a 200MHz overclock. This suggests the software doesn't scale well with frequency. The 3700X result compared to the 3900X also suggests the same.

Now, I've already posted that I suspect the 9900K was running at 4.7GHz, but in light of the reviewer stating MCE is off, what to do? Lol

Edit: the 9900K in stock mode will still turbo opportunistically.


Gideon's post is telling, and Computerbase.de wrote an extensive article about the 9900K: for power to be limited to "stock" you have to set the BIOS accordingly, and in that case it will run at 119W at most if the temperature is low enough; otherwise it will stick to 95W.

If the setting is not 95W LT / 119W, then it will clock as high as the cooling apparatus allows, up to 210W; this way frequency is maxed out whatever the software. In Cinebench it will clock straight to 4.7GHz on all cores at 130W, and even in Prime95 it will stick to this frequency at 185W, which is below the 210W limit.

As for the software scaling: if the 9900K scales accordingly and the 9900KS does not, wouldn't this still be due to poor software scaling, not the 9900KS scaling worse than the 9900K?

THG mentioned that the 5.2GHz-clocked 9900KS doesn't perform in line with its frequency, for reasons they didn't elaborate. Likely the high frequency, at the limit of the process, increases clock jitter such that the pipeline has to skip a cycle here and there to keep coherency robust enough, giving up some IPC as a consequence.
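The "119W briefly, 95W sustained" behaviour described above is Intel's PL1/PL2 turbo budgeting. The sketch below is an illustrative model, not Intel's exact algorithm: it tracks granted power with an exponentially weighted moving average over a window tau (28 s is the commonly cited spec value for this segment) and allows PL2 only while that average stays at or below PL1:

```python
# Illustrative sketch of PL1/PL2 power limiting (NOT Intel's exact
# implementation). PL1 = 95 W sustained, PL2 = 119 W burst, from the
# figures quoted in the post above.
def allowed_power(requested_w, pl1=95.0, pl2=119.0, tau_s=28.0, dt_s=1.0):
    """Return the power granted each step: up to PL2 while the moving
    average of granted power stays at or below PL1, else capped at PL1."""
    ewma = 0.0
    alpha = dt_s / tau_s
    granted = []
    for p in requested_w:
        cap = pl2 if ewma <= pl1 else pl1
        g = min(p, cap)
        ewma += alpha * (g - ewma)   # exponentially weighted average
        granted.append(g)
    return granted

# A sustained 130 W request bursts at 119 W, then settles to 95 W:
trace = allowed_power([130.0] * 120)
```

This reproduces the qualitative behaviour described: a brief 119W excursion, then a permanent 95W cap, unless the BIOS raises the limits, in which case the chip just clocks as high as cooling allows.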
 

TheGiant

Senior member
Jun 12, 2017
748
353
106
Gideon's post is telling, and Computerbase.de wrote an extensive article about the 9900K: for power to be limited to "stock" you have to set the BIOS accordingly, and in that case it will run at 119W at most if the temperature is low enough; otherwise it will stick to 95W.

If the setting is not 95W LT / 119W, then it will clock as high as the cooling apparatus allows, up to 210W; this way frequency is maxed out whatever the software. In Cinebench it will clock straight to 4.7GHz on all cores at 130W, and even in Prime95 it will stick to this frequency at 185W, which is below the 210W limit.

As for the software scaling: if the 9900K scales accordingly and the 9900KS does not, wouldn't this still be due to poor software scaling, not the 9900KS scaling worse than the 9900K?

THG mentioned that the 5.2GHz-clocked 9900KS doesn't perform in line with its frequency, for reasons they didn't elaborate. Likely the high frequency, at the limit of the process, increases clock jitter such that the pipeline has to skip a cycle here and there to keep coherency robust enough, giving up some IPC as a consequence.
We all know how the power/frequency curve of the 9900K looks.

The point is whether Intel somehow managed to move the 9900KS curve to the left (fewer watts at the same frequency), or whether it's just cherry-picked binning.

Anyway, OK, we've achieved 5GHz; let's move on to efficiency, even in the high-performance segment.
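The "move the curve left" idea can be sketched with the textbook dynamic-power relation P ≈ C·V²·f: a better bin that holds the same frequency at lower voltage burns less power at every point on the curve. The voltages and effective capacitance below are hypothetical illustration values, not measurements:

```python
# Textbook dynamic-power relation: P ≈ C_eff * V^2 * f.
# All numbers here are hypothetical, chosen only to illustrate how a
# lower-voltage bin shifts the power/frequency curve left.
def dynamic_power(c_eff, volts, freq_ghz):
    return c_eff * volts ** 2 * freq_ghz

p_typical = dynamic_power(3.0, 1.30, 5.0)   # ordinary bin at 5 GHz
p_better = dynamic_power(3.0, 1.20, 5.0)    # same 5 GHz at 0.10 V less
savings = 1.0 - p_better / p_typical        # roughly 15% less dynamic power
```

Because power goes with the square of voltage, even a modest voltage reduction from binning (or process tweaks) produces a disproportionate power saving at the same frequency.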
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Typical Intel product thread; the usual suspects (the ones that have zero recent Intel products) are all here. When the 7700K was released they were here, claiming it was a "binned 6700K"; when the 8700K was released, it was supposedly impossible for Intel to add more cores in the same power envelope, and a +50% TDP increase was being suggested; the same happened with the 9th-gen products. Intel kept on disappointing them with what is now the ancient Skylake core design and incremental improvements to the 14nm process.

The jury is still out on the 9900KS; Intel must run out of 14nm runway eventually, but the results of this one sample @ 5GHz are unlike the 9900K, despite the claims from said forum warriors.
 
  • Like
Reactions: Pilum

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
The jury is still out on the 9900KS; Intel must run out of 14nm runway eventually, but the results of this one sample @ 5GHz are unlike the 9900K, despite the claims from said forum warriors.
This. That's what this is all about, at the end of the day. Such power savings/efficiency on the same node is definitely going to draw its skeptics. For now, we can only speculate and wait.
 

DrMrLordX

Lifer
Apr 27, 2000
22,692
12,638
136
...but the results of this one sample @ 5GHz are unlike the 9900K, despite the claims from said forum warriors.

How? Power-wise, sure, but performance-wise? Unless @Abwx is right and Intel is losing IPC somehow to hit those power targets?

Also, the 7700K was nothing more than a binned 6700K. Actually, it was more like an 8320E (versus 8320): same chip, different voltage/clockspeed curve.

Such power savings/efficiency on the same node is definitely going to draw its skeptics.

Why? It's just an efficient 9900K. It's not like it's parting the Red Sea or anything. Sure, having a few more reviews might be a good thing, but if this is what Intel wants to sell at $560 then let them. It's not exactly a compelling product.
 
  • Like
Reactions: spursindonesia

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,206
126
Also, the 7700K was nothing more than a binned 6700K.
Do we really want to re-hash this? The 7700K is Kaby Lake, the 6700K is Skylake; didn't Kaby Lake have new media-decoding capabilities, like being able to decode VP9 @ 4K60? I certainly remember something like that with the lower-end Pentium variants of both Skylake and Kaby Lake, in my DeskMini units, when I got my 4K UHD screens.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
9th-gen Intel chips seem to show more transistor degradation at or around 1.32V, from what I've been reading online. Hopefully that isn't the case for anyone seeking to OC this.
 

DrMrLordX

Lifer
Apr 27, 2000
22,692
12,638
136
Do we really want to re-hash this? The 7700K is Kaby Lake, the 6700K is Skylake; didn't Kaby Lake have new media-decoding capabilities, like being able to decode VP9 @ 4K60? I certainly remember something like that with the lower-end Pentium variants of both Skylake and Kaby Lake, in my DeskMini units, when I got my 4K UHD screens.

For the purpose of the discussion at hand, updates to the iGPU and/or fixed-function blocks of the chip are (and were) mostly immaterial. If you run a Skylake and a Kaby Lake 4C/8T at exactly the same clockspeed, they perform identically; the Kaby Lake just chews up less power doing it, on average. Interestingly, the 9900KS doesn't even offer that element of differentiation. It's still just Coffee Lake.
 

coercitiv

Diamond Member
Jan 24, 2014
7,224
16,977
136
Do we really want to re-hash this? The 7700K is Kaby Lake, the 6700K is Skylake; didn't Kaby Lake have new media-decoding capabilities, like being able to decode VP9 @ 4K60? I certainly remember something like that with the lower-end Pentium variants of both Skylake and Kaby Lake, in my DeskMini units, when I got my 4K UHD screens.
For the purpose of the discussion at hand, updates to the iGPU and/or fixed-function blocks of the chip are (and were) mostly immaterial. If you run a Skylake and a Kaby Lake 4C/8T at exactly the same clockspeed, they perform identically; the Kaby Lake just chews up less power doing it, on average. Interestingly, the 9900KS doesn't even offer that element of differentiation. It's still just Coffee Lake.


They did not perform identically when it came to certain encode/decode tasks. That hybrid decode support meant not only increased power usage, but also limited support and limited performance for high-bitrate content.

https://arstechnica.com/gadgets/201...ing-pc-kaby-lake-cpu-windows-10-edge-browser/
There's also the matter of hardware decoding support for 10-bit HEVC, the 4K codec used by Netflix and other streaming services. Currently, only Intel's seventh generation Kaby Lake processors support 10-bit HEVC decoding. Older sixth generation Skylake CPUs only support 8-bit HEVC decoding.
 
  • Like
Reactions: VirtualLarry

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,206
126
For the purpose of the discussion at hand, updates to the iGPU and/or fixed-function blocks of the chip are (and were) mostly immaterial. If you run a Skylake and a Kaby Lake 4C/8T at exactly the same clockspeed, they perform identically; the Kaby Lake just chews up less power doing it, on average. Interestingly, the 9900KS doesn't even offer that element of differentiation. It's still just Coffee Lake.
Well, I will agree that the core microarchitecture between Skylake and Kaby Lake was (allegedly) "the same". That doesn't mean it's the same exact piece of silicon, just binned differently, because of the media-decoding block additions. It was a different die. (Edit: Therefore, not "the same CPU".)
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
OK, this is a bit puzzling to me, because I associate a die shrink with increased clockspeed. But I am convinced that if AMD wanted, purely for marketing reasons, they could make a CPU that can clock 5GHz, regardless of what node they need to make it on: 14nm, 10nm, 7nm, whatever, as long as they put it out. And it doesn't really matter if it's less useful in real life than a higher-core-count CPU with better IPC, because it's a marketing war, not a performance war. You could literally target a specific application, i.e. make a CPU designed to run GTA V, and sell it, then make another targeted at Photoshop, and so forth.

Because, from one point of view, the only people who realistically profit from higher computing power today are not gamers who need 160fps, but people who have a professional workload, and they without a doubt need a higher core count over a higher clockspeed.
Gamers who pick a higher-clockspeed CPU for increased framerates are buying into an old and frankly debunked ideology that it will future-proof the PC, when buying cheap and then upgrading later is better.
If one CPU pushes GTA to 147 fps and another to 138, they are essentially the same, as they both overachieve; the only reason you'd want the 147 one is that you think "when both these CPUs are old, the 147 one will have more power", but by that time you could be upgrading to another midrange CPU, helped by the money you saved.

So why do people care about hitting 5GHz? Because people are people: they are instinctive, easily fooled, emotive, and will pick 5 over 4.9 because 5 is a cool round number, an imaginary target. I bet you could have two architectures where a 4.9 slightly outperforms a 5 and people would still gravitate towards the 5.
Let me remind you to google "third-pound burger": people thought that 1/3 of a pound was less than 1/4 of a pound because 4 is bigger than 3 :/ (<- my face)

It's also annoying how, in the AMD lineup, the base clockspeeds are counterintuitive; some models that boost higher have a lower base, which would leave many a shopper in wtf-land trying to figure out which one is the better CPU, while Intel screams victory with their nuclear-reactor-melting 9900K, because their marketing department understands how to sell CPUs.
Generally, each die shrink has initially come with lower boost clocks for Intel, which then increase over time with optimization. But for AMD, across Zen/Zen+/Zen 2, the die shrinks have come with higher boost speeds.

Intel:
Last iteration of 45nm was 1st gen Core i7 top boost 3.73 GHz.
First iteration of 32nm was 1st gen Core i5 top boost 3.6 GHz.
Last iteration of 32nm was 3rd gen Core i7 (Sandy-E) top boost 4.0 GHz.
First iteration of 22nm was 3rd gen Core i7 (Ivy) top boost 3.9 GHz.
Last iteration of 22nm was 4th gen Core i7 top boost 4.4 GHz.
First iteration of 14nm was 5th gen Core i7 top boost 3.8 GHz.
Iteratively over the generations, Intel has gotten 14nm up to 5 GHz.

AMD:
Zen, 14nm - top boost 4.05 GHz
Zen+, 12nm - top boost 4.4 GHz
Zen 2, 7nm - top boost 4.6 GHz (reportedly the 3950X will be 4.7 GHz)

What is so remarkable is that while Intel has always had issues with clock speeds on die shrinks, AMD hasn't.

And AMD are doing what we expect: die shrinks should bring similar clocks at lower power, or higher clocks at the same power.

Now, the issue with AMD this cycle is that "higher clocks on smaller processes" holds true only up to a point: as die shrinks progress, current leakage and heat density become big issues. Overall heat on 7nm is similar to, perhaps even less than, 12nm, but with the chiplet design, higher clocks, and the overall smaller dissipation area, the heat density is higher. So IMO the clock speed right now is constrained by heat density more than anything. This will be resolved with Threadripper. I'm not sure how they're going to solve it with the 3950X unless they are binning the hell out of the chips and/or designing a cooling solution that can more effectively pull heat off the two fully enabled chiplets.
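The heat-density point can be put in rough numbers. The die areas below are approximate public figures (~213 mm² for the monolithic 12nm Zen+ die, ~74 mm² per 7nm Zen 2 chiplet), and the 95 W core-power figure is purely an assumption to make the W/mm² comparison concrete:

```python
# Rough power-density comparison. Die areas are approximate public
# figures; 95 W of core power is an assumed number used only so the
# two densities can be compared on equal terms.
def power_density_w_per_mm2(core_power_w, die_area_mm2):
    return core_power_w / die_area_mm2

zen_plus_density = power_density_w_per_mm2(95.0, 213.0)   # monolithic 12nm die, ~0.45 W/mm^2
zen2_density = power_density_w_per_mm2(95.0, 2 * 74.0)    # two 7nm chiplets, ~0.64 W/mm^2
```

Even at the same package power, concentrating the cores into small chiplets raises the watts dissipated per square millimeter, which is why cooling the hotspots, rather than total heat, becomes the limiting factor.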