Because gaming is my primary use, and I don't really care about productivity? To each his own, but maybe you could consider occasionally that other users may have different priorities than you.
Fair enough. That being said, I think the majority of enthusiasts consider value and have a more well-rounded use of their PC than just gaming.
I doubt they can do 5GHz on one core, much less 8.
Value definitely, more rounded no. Intel doesn't have a good answer for the 2600 at $120, for instance, when you factor in that 4C4T isn't enough at this point.
"more rounded" pertained to Ryzen being the better all around cpu's if you factor in other areas besides gaming.
When Sunny Cove arrives, are people going to forget single-core boost and emphasize IPC?
The data is there, maybe one could, you know, let the reader make up his own mind, without hyperbolic commentary.
Otherwise, save money by getting the 3700X or get more performance by getting the 3900X.
My post wasn't an opinion, it was based on the reviewer stating they turned MCE off. Normally, with MCE off, an Intel CPU of the K(S) suffix reverts to stock settings. Why would the reviewer even mention it otherwise?

When set to the 95W long-term limit, the 9900K will go up to 119W briefly and won't reach 130W, while they are talking about a sustained load, which means 130W permanently.
So much for your 95W pulled out of nowhere; the scores say it all. In NAMD it is running at 4.63GHz or so, while the water-cooled 9900KS is at 4.94 and the air-cooled one is at 4.87...
My post wasn't an opinion, it was based on the reviewer stating they turned MCE off. Normally, with MCE off, an Intel CPU of the K(S) suffix reverts to stock settings. Why would the reviewer even mention it otherwise?
The slide you posted suggests the 9900K gained 0.15 points with a 300MHz overclock (if MCE is active), while the 9900KS only gained 0.08 points with a 200MHz overclock. This suggests the software doesn't scale well with frequency. The 3700X result compared to the 3900X also suggests the same.
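For what it's worth, here's the rough per-MHz math on those gains. The numbers are just the ones quoted from the slide above; whether the two runs are directly comparable is an assumption on my part:

```python
# Rough scaling comparison based on the gains quoted above.
# Assumes the two scores are directly comparable, which may not hold.
gains = {
    "9900K  (+300 MHz, MCE on?)": (0.15, 300),
    "9900KS (+200 MHz)": (0.08, 200),
}

for chip, (points, mhz) in gains.items():
    print(f"{chip}: {points / (mhz / 100):.3f} points per 100 MHz")

# -> 9900K: 0.050 points per 100 MHz; 9900KS: 0.040 points per 100 MHz
```

So even taking the slide at face value, neither chip gains much per 100MHz, which is why I read it as the software not scaling with frequency.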
Now, I've already posted that I suspect the 9900K was running at 4.7GHz, but in light of the reviewer stating MCE is off, what to do? Lol
Edit: the 9900K in stock mode will still turbo opportunistically.
We all know what the power/frequency curve of the 9900K looks like. Gideon's post is telling, and Computerbase.de made an extensive article about the 9900K: for power to be limited to "stock" you have to set the BIOS accordingly, and in that case it will run at 119W at most if the temperature is low enough, otherwise it will stick to 95W.
If the setting is not 95W long-term / 119W short-term, then it will clock as high as the cooling apparatus allows, up to 210W; this way frequency is maxed out whatever the software. In Cinebench it will clock straight at 4.7GHz on all cores at 130W, and even in Prime95 it will stick to this frequency at 185W, which is below the 210W limit.
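For anyone who wants to see how that long-term/short-term limiting plays out over time, here's a toy sketch. The 95W/119W values are the ones discussed above; the 28-second tau and the moving-average behaviour are my assumptions based on Intel's documented defaults, and real boards override all of this anyway:

```python
# Toy model of Intel's PL1/PL2/tau power limiting, NOT an exact reproduction
# of the real algorithm. PL1 caps a moving average of package power, PL2 caps
# instantaneous power, and tau sets the averaging window (assumed 28s here).

PL1, PL2, TAU = 95.0, 119.0, 28.0   # watts, watts, seconds (illustrative)
DT = 1.0                            # time step in seconds

def simulate(demand_watts, seconds):
    """Return the power the package is allowed to draw each second."""
    avg = 0.0
    allowed = []
    for _ in range(int(seconds / DT)):
        p = min(demand_watts, PL2)          # never exceed the short-term limit
        alpha = DT / TAU
        # If granting p would push the moving average over PL1, clamp to PL1.
        if avg + alpha * (p - avg) > PL1:
            p = min(p, PL1)
        avg += alpha * (p - avg)            # update the moving average
        allowed.append(p)
    return allowed

trace = simulate(demand_watts=130.0, seconds=60)
print("first 5s:", trace[:5])    # boosts at PL2 (119 W) early on...
print("last 5s: ", trace[-5:])   # ...then settles to PL1 (95 W) sustained
```

That is the whole difference between the "stock" 95W behaviour and the unlocked 210W behaviour people see in reviews: with the limits lifted, the clamp simply never kicks in below the cooler's capacity.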
As for the software scaling: if the 9900K scales accordingly and the 9900KS does not, would that still be due to the software scaling badly, and not the 9900KS scaling less well than the 9900K?
THG mentioned that the 5.2GHz-clocked 9900KS doesn't perform in line with its frequency, for reasons they didn't elaborate on. Likely the high frequency at the limit of the process increases clock jitter, such that the pipeline has to skip a cycle here and there to keep coherency robust enough, and gives up some IPC as a consequence.
The jury is still out on the 9900KS; Intel must run out of 14nm runway eventually, but the results of this one sample @ 5GHz are unlike the 9900K, despite the claims from said forum warriors.

This. That's what this is all about, at the end of the day. Such power savings/efficiency on the same node is definitely going to draw its skeptics. For now, we can only speculate and wait.
Also, the 7700K was nothing more than a binned 6700K.

Do we really want to re-hash this? The 7700K is Kaby Lake, the 6700K is Skylake; didn't Kaby Lake have new media-decoding capabilities, like being able to decode VP9 @ 4K60? I certainly remember something like that with the lower-end Pentium variety of both Skylake and Kaby Lake, in my DeskMini units, when I got my 4K UHD screens.
For the purpose of the discussion at hand, updates to the iGPU and/or fixed functions of the chip are (and were) mostly immaterial. If you run a Skylake and a Kaby Lake 4c/8t at exactly the same clockspeed, they perform identically. The Kaby Lake just chews up less power doing it on average. Interestingly, the 9900KS doesn't even offer that element of differentiation. It's still just Coffee Lake.
There's also the matter of hardware decoding support for 10-bit HEVC, the 4K codec used by Netflix and other streaming services. Currently, only Intel's seventh generation Kaby Lake processors support 10-bit HEVC decoding. Older sixth generation Skylake CPUs only support 8-bit HEVC decoding.
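Slightly tangential, but if anyone wants to verify what their own chip's media block actually exposes, here's one way to probe it on Linux through VA-API. It's just a sketch: it assumes the `vainfo` tool from libva-utils is installed, and on Windows you'd use something like DXVA Checker instead.

```python
# Quick-and-dirty check (Linux + VA-API only) for whether the iGPU's media
# block advertises 10-bit HEVC (Main 10) hardware decode.
import subprocess

def supports_hevc_main10() -> bool:
    try:
        out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        raise SystemExit("vainfo not installed (package: libva-utils)")
    # Kaby Lake-era Intel iGPUs typically report VAProfileHEVCMain10 with a
    # VLD (decode) entrypoint; Skylake generally only lists the 8-bit profile.
    return any("VAProfileHEVCMain10" in line and "VAEntrypointVLD" in line
               for line in out.splitlines())

print("10-bit HEVC hardware decode:", supports_hevc_main10())
```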
Well, I will agree that the core micro-architecture of the CPU cores between Skylake and Kaby Lake was (allegedly) "the same". That doesn't mean that it's the same exact piece of silicon, just binned differently, because of the media-decoding block additions. It was a different die. (Edit: Therefore, not "the same CPU".)
Generally, each die shrink has come with slower boosts for Intel initially, which then increase over time with optimization. But for AMD on Zen/Zen+/Zen 2, the die shrinks have come with higher boost speeds.

OK, this is a bit puzzling to me because I associate a die shrink with increased clockspeed. But I am convinced that if AMD wanted, purely for marketing reasons, they could make a CPU that can clock 5GHz, regardless of what node they need to make it on; 14nm, 10nm, 7nm, whatever, as long as you put it out. And it doesn't really matter if it's less useful IRL than a higher-core-count CPU with better IPC, because it's a marketing war, not a performance war. You could literally target a specific application, i.e. make a CPU designed to run GTA V, and sell it, then make another targeted at Photoshop, and so forth.
Because, from one point of view, the only people who realistically profit from higher computing power today are not gamers who need 160fps, but people who have a professional workload, and they without a doubt benefit more from a higher core count than from a higher clockspeed.
Gamers who pick a higher-clockspeed CPU for increased framerates are buying into an old and frankly debunked idea that it will future-proof the PC, when buying cheap and then upgrading later on is better.
If one CPU pushes GTA to 147 fps and another to 138, they are essentially the same, as they both overachieve; the only reason you'd want the 147 one is because you think "when both these CPUs are old, the 147 one will have more power", but by that time you could be upgrading to another midrange CPU, helped by the money you saved.
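Just to put a number on that gap (pure arithmetic on the hypothetical 147/138 figures above):

```python
# Relative difference between the two hypothetical framerates quoted above.
fast, slow = 147, 138
print(f"{(fast - slow) / slow:.1%} faster")         # ~6.5% faster
print(f"{1000/slow - 1000/fast:.2f} ms/frame gap")  # ~0.44 ms per frame
```

A ~6.5% gap, less than half a millisecond per frame, which is exactly why I call it overachieving either way.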
So why do people care about hitting 5GHz? Because people are people: they are instinctive, easily fooled, emotive, and will pick 5 over 4.9 because 5 is a cool round number, an imaginary target. I bet you could have two architectures where a 4.9 slightly outperforms a 5 and people would still gravitate towards the 5.
Let me remind you to google "third pound burger" - people thought that 1/3 of a pound was less than 1/4 of a pound because 4 is bigger than 3 :/ (<- my face)
It's also annoying how the base clockspeeds in the AMD lineup are counterintuitive: some models that boost higher have a lower base, which would leave many a shopper in wtf-land trying to figure out which one is the better CPU, while Intel screams victory with their nuclear-reactor-melting 9900K because their marketing department understands how to sell CPUs.
I believe this to be largely true as well. The heat density of 7nm is pretty amazing. So IMO the clock speed right now is heat-density constrained more than anything.