Intel Core i9-9900K Tested in 3DMark, Clocking Up To 5GHz, Faster Than Ryzen 2700

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

ub4ty

Senior member
Jun 21, 2017
749
898
96
Exactly, no one called the 1080 Ti overpriced compared to Vega 64, and we are talking about a similar gulf in price/performance here.

If it's a market-leading CPU it will be priced to match, I call it the 'flagship tax'. In fact I'm somewhat surprised at the $450 price point, I was expecting it to be in excess of $500, especially with the i9 moniker
GPUs have a direct function towards gaming and high FPS.
GPUs tend to be maxed out in such applications.
CPUs, especially at 8-core counts, are hardly ever for the average person. A lot of people like to pretend they are, or that every little bit of performance matters with a CPU of such caliber, but it's a joke tbqh. The grand majority of games do not max out such processors. I also find the common 'I do CPU transcoding/encoding' justification to be a gimmick as well. The grand majority of people overspec because computer building has a lot of enthusiasts.

Also, I'm not sure if people caught on, but the top processors available from AMD are the 2700/2700X, matching up with the 1700/1700X. They have yet to release the 2800/2800X. I'm assuming, since they are now the leader in my mind, they can wait and play the games Intel used to play w/ their releases. I'm sure the 2800/2800X will be announced shortly after Intel plays their last card for this grouping of processors. Then we're off to 7nm land, where Intel has absolutely no answer.

Also, the point about the 1080 Ti is that it is overspec'd, to be quite honest. I am thankful people enjoy blowing money on such products as it makes the others down the line cheaper, but a 1070 is just fine, as is a 1080. The 1080 Ti gets into absurdity levels, especially in relation to power consumption, as does the Vega 64. As I imagine the 2800/2800X will be. I call the pricing of such products the idiot tax. I hope I don't offend anyone. I always try to go for the better performance/value middle ground, because by the time things actually catch up with such products, a new line is out that trumps the prior flagship at a far more reasoned cost.

You don't 'lead' nowadays with the best-performance products. You lead with the best performance/value.
This isn't the early 2000s, where people were dying for the single core -> dual core or dual core -> quad core leaps because their processor was absolutely slammed when they opened more than 6 tabs. We have 8 cores at $170, and most cores sit idle and under-clocked to reduce power consumption for the grand majority of a computer's life.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Price/performance never scales linearly with flagship products, by your very same comparison a 2700X at $330 is also grossly overpriced compared to a $170 1700, you're paying almost double for a 15% performance boost?
Why do you think I bought another 1700 vs a 2700x? I'm consistent and the logic holds.
I'm not paying double for a 15% performance boost... Not in an era where hardware is more than capable and beyond requirements.

In a high end PC, let's say one costing $2000, a CPU might account for 20 - 25% of the overall cost of the system. If paying an extra 5% in overall cost lets you have the best in class processing power, then I'd argue it's worth paying the premium, as that could net you 20% better performance when CPU bound.
For my PC that cost around $2000, I have an $850 16-core Threadripper processor w/ 64 PCIe lanes, 3 NVMe slots, and the ability to handle 128GB of RAM. Again, price/performance mattered, and AMD absolutely blows Intel out of the water here again. The reason for purchasing such a setup is that I needed more cores to scale with more I/O.

This funky and muddled area of ridiculously priced 8-core processors w/ garbage-tier I/O is the kind of crap that Intel loves to exploit, and I grew tired of it years ago. They did the same thing w/ quad-core.
Intel needs to make one goddamn socket for their processor line, perfect it, and stop playing games w/ PCIe lanes, cache sizes, and other arbitrary CPU features. It will be cheaper to produce and easier to sell. K.I.S.S.

They try to come up w/ these ridiculous wedge markets and exploit them by arguing that they have the premium performance product or some obscure feature no one uses. Nobody asked for such a product tbqh. CPUs dynamically under-clock the crap out of cores to conserve energy. What I want to see is some real journalism and reviews that show how much time these insanely clocked processors spend on memory stalls waiting for data, that get underneath surface metrics like instructions per clock, and that highlight real-world performance when you have to sync data between cores.

Everyone knows the major bottleneck is memory latency and storage latency. This is why the clock battles stopped and the core battles began. Nothing has changed here.

I'll be honest, I found it quite comical when instructions per clock became a mainstream metric, as if people understand CPU architecture well enough to make this a metric worthy of discussion. Meanwhile, there are all sorts of gotchas in a micro-arch that make such a varied metric useless.
 

Abwx

Lifer
Apr 2, 2011
10,956
3,474
136
9900K will turbo to 4.7GHz on all cores.

All cores when loaded 8C/8T-wise, which is technically correct if they state an all-core 4.7 turbo...


Isn't TDP derived from the base clock? Pretty sure a 4.7GHz 9900K would pull more than 95W under full load - an 8700K already does that. Simple maths suggests it could be a 125W chip as you said, perhaps a bit more since it's also running at higher clocks than the 8700K.

WRT performance, if the final clockspeeds are indeed 4.7GHz ACT, I struggle to see how it wouldn't be significantly faster than a 2700X, which turbos to 4GHz ACT. Intel also enjoys an IPC advantage, so I'd expect it to beat a 2700X by an average of 20% or more in applications.


Dunno how they'll market the thing, but for sure that 10% higher all-core turbo implies roughly 25% more power, even before taking into account the 33% added cores...

FTR an 8700K should stay below 95W @ 4.3GHz for usual loads, but if AVX2 or FMA are used, as in x265 or Prime95, power draw will increase to 110-115W.


According to your charts, even a 7820X already beats the 2700X slightly, and that is clocked at the same 4GHz as the 2700X. Compared to the 7820X, a 9900K has a larger cache (16MB vs 11MB), 17.5% higher clocks, and is ring-bus based, which helps latency-sensitive apps and gaming. Speaking of gaming, I don't think we will see it perform much better than an 8700K, and most of those gains will probably be due to the higher clocks rather than the extra 2 cores / 4 threads.

The same charts say that an i7-6800K is as fast as an 8700K in WinRAR and 7-Zip, which is due to the RAM channel count, so the 7820X can't be used as a minimal baseline; it proves that the ring bus doesn't help that much compared to increasing RAM channels.

https://www.hardware.fr/articles/965-2/performances-applicatives.html

Assuming the same frequency as the 8700K and apparent scaling of 95%, the 9900K will be, as said, barely 10% faster than a 2700X, and that would require a 125W TDP @ 4.3GHz turbo on 8C/16T.
 

RichUK

Lifer
Feb 14, 2005
10,320
672
126
9900K @ 5GHz all cores and 4000MHz memory - that'll be my goal.

And a stretch target of 5.2GHz and 4200MHz memory on a Z370 Apex :cool:
 
  • Like
Reactions: lightmanek

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Assuming the same frequency as the 8700K and apparent scaling of 95%, the 9900K will be, as said, barely 10% faster than a 2700X, and that would require a 125W TDP @ 4.3GHz turbo on 8C/16T.

If you expect a 9900K, with a 17.5% higher frequency, plus higher IPC, to be 'barely 10% faster' than a 2700X, then you are suggesting the 2700X will have higher IPC than the 9900K...

I'll maintain my prediction of at least 20% faster overall, probably closer to 25% if you include AVX workloads. Not long to go now to see who is correct...
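The back-and-forth above is just arithmetic on assumed clocks and IPC, and can be written out explicitly. A minimal Python sketch (the clock figures and the ~5% IPC edge are the posters' assumptions in this thread, not measured values):

```python
# Back-of-envelope speedup estimate: perf ~ clock x IPC (x scaling factor for MT).
# All inputs are assumptions from the thread, not measurements.

def perf_ratio(clock_a, ipc_a, clock_b, ipc_b, scaling=1.0):
    """Estimated speedup of chip A over chip B at equal core counts."""
    return (clock_a * ipc_a * scaling) / (clock_b * ipc_b)

# epsilon84's scenario: 4.7 GHz all-core turbo and a modest IPC edge over Zen+.
optimistic = perf_ratio(4.7, 1.05, 4.0, 1.00)
# A TDP-limited scenario: 8700K-like 4.3 GHz all-core, 95% scaling, no IPC edge.
# The gap nearly vanishes, which is why the clock assumption dominates the argument.
pessimistic = perf_ratio(4.3, 1.00, 4.0, 1.00, scaling=0.95)

print(f"optimistic ~{optimistic - 1:.0%} faster, pessimistic ~{pessimistic - 1:.0%} faster")
```

Whichever side is right, the disagreement is almost entirely about what sustained all-core clock a 95W limit actually permits.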
 
  • Like
Reactions: PeterScott

ub4ty

Senior member
Jun 21, 2017
749
898
96
If you expect a 9900K, with a 17.5% higher frequency, plus higher IPC, to be 'barely 10% faster' than a 2700X, then you are suggesting the 2700X will have higher IPC than the 9900K...

I'll maintain my prediction of at least 20% faster overall, probably closer to 25% if you include AVX workloads. Not long to go now to see who is correct...
https://www.theinquirer.net/inquire...pu-to-take-on-intels-8-core-coffee-lake-chips

Seems people are forgetting this...
It's a 2700/2700x for a reason.
 
  • Like
Reactions: DarthKyrie

RichUK

Lifer
Feb 14, 2005
10,320
672
126
It won't, unless they increase TDP drastically, and even then the difference will amount to the frequency advantage.

At stock, and assuming it's the same "95W" as an 8700K, it shouldn't get higher than a 3.8GHz all-core turbo; if they push up to 125W this will be barely 10% faster than a stock 2700X, even in games...

Use the 8700K as reference in the charts below and draw your own conclusions :

https://www.hardware.fr/articles/975-17/indices-performance.html


It will. Plain and simple.
 


Abwx

Lifer
Apr 2, 2011
10,956
3,474
136
If you expect a 9900K, with a 17.5% higher frequency, plus higher IPC, to be 'barely 10% faster' than a 2700X, then you are suggesting the 2700X will have higher IPC than the 9900K...

I'll maintain my prediction of at least 20% faster overall, probably closer to 25% if you include AVX workloads. Not long to go now to see who is correct...

They are limited by virtue of the TDP: 33% more cores mean roughly 25% more power, and at 4.7GHz this would increase by a further 20%, for a 50% grand total. They'll have to noticeably increase both the TDP (to 125W) and the AVX2 frequency offset.
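Abwx's arithmetic above can be written out step by step. A sketch in Python using his own factors (the 1.25x and 1.20x multipliers are his estimates, not measurements; the 95W base is Intel's rated TDP):

```python
# Abwx's back-of-envelope: scale the 8700K's rated power by his estimated factors.
BASE_TDP_W = 95.0        # 8700K rated TDP
CORE_FACTOR = 1.25       # 33% more cores -> ~25% more power (uncore doesn't scale)
CLOCK_FACTOR = 1.20      # 4.3 -> 4.7 GHz all-core: ~9% clock plus a voltage bump

total_factor = CORE_FACTOR * CLOCK_FACTOR       # 1.5x
estimated_power = BASE_TDP_W * total_factor     # ~142 W sustained

print(f"{total_factor:.2f}x -> ~{estimated_power:.0f} W")
```

Note this lands right around the ~140W figure raised elsewhere in the thread.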
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
They are limited by virtue of the TDP: 33% more cores mean roughly 25% more power, and at 4.7GHz this would increase by a further 20%, for a 50% grand total. They'll have to noticeably increase both the TDP (to 125W) and the AVX2 frequency offset.
Just wait for reviews, happy to bookmark this and come back in a couple of weeks to see who was correct. All I'll say is that a 9900K will be as 'limited' by the 95W TDP as a 2700X is - they will both exceed that figure under full load.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,675
3,801
136
...I also find the common 'I do CPU transcoding/encoding' justification to be a gimmick as well...

I don't see how that is a gimmick?

This funky and muddled area of ridiculously priced 8-core processors w/ garbage-tier I/O is the kind of crap that Intel loves to exploit, and I grew tired of it years ago. They did the same thing w/ quad-core.
Intel needs to make one goddamn socket for their processor line, perfect it, and stop playing games w/ PCIe lanes, cache sizes, and other arbitrary CPU features. It will be cheaper to produce and easier to sell. K.I.S.S.

Intel has been grossly mismanaged for some time now. That is why BK is out, regardless of what the official reason was. With any luck we will be seeing less of this segmentation. If the leaks about the 9th generation are true though, with HT only on the top CPU, perhaps not.

Everyone knows the major bottleneck is memory latency and storage latency. This is why the clock battles stopped and the core battles began. Nothing has changed here.

Not quite. The clock speed battles stopped because of Dennard Scaling.

I'll be honest, I found it quite comical when instructions per clock became a mainstream metric, as if people understand CPU architecture well enough to make this a metric worthy of discussion. Meanwhile, there are all sorts of gotchas in a micro-arch that make such a varied metric useless.

Not sure I follow. The average user won't know enough about IPC to care about it, but it is still an important metric. Are you saying it is useless because it is workload dependent?
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
I don't see how that is a gimmick?
The extent to which people reference it as a use case is overstated.
The vast majority of people aren't maxing out their CPUs. Unless you have a professional video production company, I doubt you're maxing out a processor doing encoding 24/7 such that it's essential to get a 10% reduction in encode times. Leave it running overnight and magically it's done. Add more cores and magically it gets done faster. This is why 16-core+ processors exist and why AMD is now launching a 32-core. Clocks aren't the key to everything. I have a dual core, quad core, 8 core, and 16 core available to me at a moment's notice. I spend the majority of my time on a dual core. Less power, and you don't need a boatload of cores for reading documents and typing in code. When I need to do a compute task, I fire up either my 8-core machines or my 16-core. I game on a quad core w/ a dusty Maxwell 2GB vidya and I have no issues w/ performance. I don't game on my work machines because they're for work. I have a mix of about 6 Pascal 1070s/1080s. I have never gamed on a single one of them.

Not quite. The clock speed battles stopped because of Dennard Scaling.
Yes, physics, which makes the claim that one company can manage clocks far above the competition at the current transistor size and 95W laughable. The 8700K is a 6-core processor and it doesn't even claim the clocks stated for the i9-9900K. This is where the gimmicks come in, of course. If people think you can get even higher clocks on 8 cores, you need your head checked. As for my earlier comment, jamming clocks higher and higher, even if physics allows for it, hits issues when your memory is much slower and you're hitting memory stalls. This is why, in professional server land, where some rigs have a terabyte+ of memory, the key focus is on cores and not clocks. An $8,000 professional processor has half the clock speed of an i9-9900K. The idea that some mass of special people are out there doing professional tasks that need 5GHz speeds over core counts is a gimmick.

Memory stalls are real. If your processor ran at 10GHz, the majority of the time it would be stalled waiting on data from memory. This is where clocks become stupid. This is where the 101 e-celebs who do reviews begin scratching their heads when benchmark performance starts to stall no matter how many thousands of dollars they throw at cooling seeking goofball-level clock speeds.

It seems people haven't been building long enough, because there was a period where people cried for years about how slow RAM was and the huge bottleneck it created for CPUs. Then it was storage latency. A 6GHz processor, if one were to exist, would need completely different memory plumbing than what we have now. This is where things become architectural issues, not something you can solve by overclocking a processor.
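The stall argument can be made concrete with the textbook effective-CPI model (a Python sketch; the core CPI, miss rate, and 70ns memory latency are illustrative assumptions, not measurements of any particular chip):

```python
# Effective cycles-per-instruction with fixed-latency memory:
#   CPI_eff = CPI_core + misses_per_instr * (mem_latency_ns * freq_GHz)
# Raising the clock makes each memory stall cost MORE cycles, so
# time-per-instruction improves far less than the clock ratio suggests.

def time_per_instr_ns(freq_ghz, cpi_core=0.5, misses_per_instr=0.005,
                      mem_latency_ns=70.0):
    stall_cycles = misses_per_instr * mem_latency_ns * freq_ghz
    return (cpi_core + stall_cycles) / freq_ghz

t4 = time_per_instr_ns(4.0)     # ~0.475 ns per instruction at 4 GHz
t10 = time_per_instr_ns(10.0)   # ~0.400 ns per instruction at 10 GHz
speedup = t4 / t10              # ~1.19x from a 2.5x clock increase
```

Under these assumptions, a 2.5x clock jump buys under 20% real speedup, which is the poster's point about memory stalls swallowing clock gains.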

Not sure I follow. The average user won't know enough about IPC to care about it, but it is still an important metric. Are you saying it is useless because it is workload dependent?
IPC is a stupid metric to judge a processor on without talking about micro-architectural details that wildly alter IPC based on varied compute/data flows.
Are you saying it is useless because it is workload dependent?
Yes, it varies wildly based on the workload.
It's literally program dependent and instruction dependent.

Unless you have a degree in computer engineering (computer architecture), you likely have no clue what this metric means, and as advertised it is meaningless because there is no deeper detail provided. I have a funny feeling most people are just taking CPU execution time and reverse-engineering a goofy figure to appear more technical.

If a CPU takes 30% less time to execute a program, that's the figure that matters. Trying to appear more technical by reverse-engineering this figure in some crude and likely incorrect manner is beyond ridiculous. When I heard the term IPC being thrown around, I was wondering what in the world was going on that people were talking about interprocess communication. Then someone tells me: dude, it stands for instructions per clock, as if they knew what the heck that meant or its significance.
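The relationship being gestured at here is the "iron law" of processor performance: execution time = instruction count x CPI / frequency, so IPC only means something alongside instruction count and clock. A quick sketch with made-up numbers:

```python
# Iron law of performance: time = instructions * CPI / frequency.
# Two chips can post very different "IPC" yet near-identical run times,
# which is why execution time, not IPC alone, is the figure that matters.

def exec_time_s(instructions, ipc, freq_hz):
    cycles = instructions / ipc
    return cycles / freq_hz

prog = 1e9  # one billion dynamic instructions (illustrative)
chip_a = exec_time_s(prog, ipc=2.0, freq_hz=4.7e9)  # high clock, lower IPC
chip_b = exec_time_s(prog, ipc=2.4, freq_hz=4.0e9)  # higher IPC, lower clock
# chip_a ~0.106 s, chip_b ~0.104 s: a wash, despite a 20% "IPC" gap.
```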

It's like the way AI is tossed around. Not a single thing is AI, it's just statistics, but the layperson is convinced they're on to something.

 

ub4ty

Senior member
Jun 21, 2017
749
898
96
And perhaps you are forgetting how close to its clockspeed ceiling a 2700X already is. 4.3GHz seems to be the absolute max, with 4.2GHz being the norm for most 2700Xs. That doesn't leave a lot of room for a 2800X unless AMD has managed to drastically improve the process to accommodate higher clocks.
I'm not forgetting anything. I know that physics and heat dissipation apply to everybody. Nobody can defy physics. So I know a special game of foolery is going on with the numbers published thus far from Intel, and I know they have a history of it. That goofy i9-9900K will be right around the same, if they don't outright gut it. Also, the 8700K is a 6-core processor. I have no clue why people keep referencing it in comparison.

The base clock of the Core i9-9900K is 3.6GHz. That's all I need to see.
The rest is literally a gimmick. Surprise, the 2700X has around the same base.
 

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
And perhaps you are forgetting how close to its clockspeed ceiling a 2700X already is. 4.3GHz seems to be the absolute max, with 4.2GHz being the norm for most 2700Xs. That doesn't leave a lot of room for a 2800X unless AMD has managed to drastically improve the process to accommodate higher clocks.

This is why a 2800X doesn't excite me. The 2700X is already the best they've got to offer and everyone knows it. What can they do? Push the thing a tiny bit more to 4.4GHz with insane voltage or something? The 2700X is already as good as it's going to get. They won't catch Intel's 14nm clocks, so the only thing they can do is release a higher core count chip or something. At least that way they can still compete in the multicore stuff while getting rekt in single core as usual, at least until 7nm. The 2800X does not excite.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
I'm not forgetting anything. I know that physics and heat dissipation apply to everybody. Nobody can defy physics. So I know a special game of foolery is going on with the numbers published thus far from Intel, and I know they have a history of it. That goofy i9-9900K will be right around the same, if they don't outright gut it. Also, the 8700K is a 6-core processor. I have no clue why people keep referencing it in comparison.

The base clock of the Core i9-9900K is 3.6GHz. That's all I need to see.
The rest is literally a gimmick. Surprise, the 2700X has around the same base.

Base clock has almost zero bearing on actual performance. An 8700K has a base clock of 3.7GHz and runs all day at 4.3GHz.

How you can dismiss the 9900K as a 'gimmick' without having actually seen official reviews is beyond me. If and when it's proven that it is stuck at 3.6GHz in most cases, then I would agree with you.
 
  • Like
Reactions: PeterScott

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
First of all,

Ryzen 7 2700 = 65W TDP
Ryzen 7 2700X = 105W TDP

Core i9-9900K = 95W TDP

Secondly,

If we use a 95W TDP heatsink on the Core i9-9900K, it will not maintain the all-core turbo at 4.7GHz.
Almost every review will use water cooling with 150-200+ watts of cooling capacity. This way the CPU will be able to hold the all-core turbo for longer, thus increasing power consumption and performance above the default 95W TDP (TDP-Up).

Thermal Management (page 88)
https://www.intel.com/content/www/u...core/8th-gen-core-family-datasheet-vol-1.html

5.1.4 Configurable TDP (cTDP) and Low-Power Mode

Configurable TDP (cTDP) and Low-Power Mode (LPM) form a design option where the processor's behavior and package TDP are dynamically adjusted to a desired system performance and power envelope. Configurable TDP and Low-Power Mode technologies offer opportunities to differentiate system design while running active workloads on select processor SKUs through scalability, configuration and adaptability. The scenarios or methods by which each technology is used are customizable but typically involve changes to PL1 and associated frequencies for the scenario with a resultant change in performance depending on system's usage. Either technology can be triggered by (but are not limited to) changes in OS power policies or hardware events such as docking a system, flipping a switch or pressing a button. cTDP and LPM are designed to be configured dynamically and do not require an operating system reboot.
Note: Configurable TDP and Low-Power Mode technologies are not battery life improvement technologies.

5.1.4.1 Configurable TDP

Note: Configurable TDP availability may vary between the different SKUs. With cTDP, the processor is now capable of altering the maximum sustained power with an alternate processor IA core base frequency. Configurable TDP allows operation in situations where extra cooling is available or situations where a cooler and quieter mode of operation is desired. Configurable TDP can be enabled using Intel's DPTF driver or through HW/EC firmware. Enabling cTDP using the DPTF driver is recommended as Intel does not provide specific application or EC source code. cTDP consists of three modes as shown in the following table.



and

Table 5-1. Configurable TDP Modes (Sheet 1 of 2)

Base - The average power dissipation and junction temperature operating condition limit, specified in Table 5-2 for the SKU Segment and Configuration, for which the processor is validated during manufacturing when executing an associated Intel-specified high-complexity workload at the processor IA core frequency corresponding to the

TDP-Up - The SKU-specific processor IA core frequency where manufacturing confirms logical functionality within the set of operating condition limits specified for the SKU segment and Configurable TDP-Up configuration in Table 5-2. The Configurable TDP-Up Frequency and corresponding TDP is higher than the processor IA core Base Frequency and SKU Segment Base TDP.
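The reason a bigger cooler buys longer turbo residency, as AtenRa describes, is that Intel firmware lets the package draw up to a short-term limit (PL2) while a moving average of power stays under the sustained limit PL1 (the rated TDP). A simplified Python model of that budget (the 150W PL2 and 8-second time constant are placeholder values; the real ones are SKU- and board-specific):

```python
# Simplified model of Intel's turbo power budgeting: the package may draw
# up to PL2 while an exponentially weighted moving average of power stays
# below PL1 (the rated TDP). All constants here are placeholders.

PL1, PL2 = 95.0, 150.0   # sustained / short-term power limits, watts
TAU, DT = 8.0, 0.1       # averaging time constant and timestep, seconds

def simulate(seconds):
    """Return the per-step package power over `seconds` of full load."""
    ewma, history = 0.0, []
    for _ in range(int(seconds / DT)):
        power = PL2 if ewma < PL1 else PL1   # burst while the budget allows
        ewma += (DT / TAU) * (power - ewma)  # moving average of power
        history.append(power)
    return history

h = simulate(60)
burst_s = sum(DT for p in h if p > PL1)  # seconds spent above PL1 (~TAU)
```

The cooler doesn't change this budget directly, but a 150-200W-capable loop keeps temperatures low enough that thermal throttling never cuts in below these limits, and many boards simply raise or ignore PL1, which is the behavior AtenRa expects reviews to show.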
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
This is why a 2800X doesn't excite me. The 2700X is already the best they've got to offer and everyone knows it. What can they do? Push the thing a tiny bit more to 4.4GHz with insane voltage or something? The 2700X is already as good as it's going to get. They won't catch Intel's 14nm clocks, so the only thing they can do is release a higher core count chip or something. At least that way they can still compete in the multicore stuff while getting rekt in single core as usual, at least until 7nm. The 2800X does not excite.

Don't tell that to the AMD fans that think AMD must respond with a 2800X or 'lose the game'. Hopefully AMD management isn't as reactionary as the fans. Let Intel have their moment in the sun, I'm sure AMD will have a proper response when 7nm rolls out, but there is no way a '2800X' is going to stop a 9900K having the 'mainstream CPU crown', not with AMD's inability to scale much beyond 4GHz at reasonable power limits.
 
  • Like
Reactions: William Gaatjes

ub4ty

Senior member
Jun 21, 2017
749
898
96
Base clock has almost zero bearing on actual performance. An 8700K has a base clock of 3.7GHz and runs all day at 4.3GHz.

How you can dismiss the 9900K as a 'gimmick' without having actually seen official reviews is beyond me. If and when it's proven that it is stuck at 3.6GHz in most cases, then I would agree with you.
So, a 6-core has a base clock of 3.7GHz and runs at 4.3GHz at boost.
Meanwhile at AMD :
https://www.newegg.com/Product/Product.aspx?Item=N82E16819113499
An 8-core with a base clock of 3.7GHz and a max boost of 4.3GHz.

The gimmick is that the competition can do the same, because it's the same physical limitation impacting both AMD and Intel.

As for ridiculous overclocking and the insane power consumption associated :
Ultimately, a 9% improvement for a 40% power consumption increase is plainly not worth it

Meanwhile you want me to believe an 8-core processor can beat the 6-core on clocks? No, this is a gimmick. It's only 1 or 2 cores... then maybe 3 or 4... and if the stars align, if you spent $400 on cooling, you don't care about an inferno or your electricity bill, and there just so happens to be a blood moon out, maybe, just maybe, 5 cores.

What market is this serving? Vidya? Because this most certainly isn't the kind of thing you shoot for if you're doing professional workloads that max out a processor. I hear you on the argument that gamers love this kind of stuff... But you're losing me if you begin arguing that there are professional reasons to be bending over backwards for these kinds of clocks.

This is what professionals buy
https://ark.intel.com/products/120474/Intel-Xeon-Gold-5120-Processor-19_25M-Cache-2_20-GHz
  • Processor Base Frequency : 2.20 GHz
  • Max Turbo Frequency : 3.20 GHz
Most of the money is spent on RAM, because data fetching is still, to this day, the bottleneck.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Don't tell that to the AMD fans that think AMD must respond with a 2800X or 'lose the game'.

Absolutely not. Intel isn't anywhere on my radar until their ridiculous prices come down, they standardize their sockets past one generation, and they stop w/ the alphabet soup offering of processors. Intel lost the game the minute Ryzen came out. They lost the game for me in the desktop and in the server market.

Hopefully AMD management isn't as reactionary as the fans. Let Intel have their moment in the sun, I'm sure AMD will have a proper response when 7nm rolls out, but there is no way a '2800X' is going to stop a 9900K having the 'mainstream CPU crown', not with AMD's inability to scale much beyond 4GHz at reasonable power limits.
I'm sure the 2800X is coming out after the 9900K launches, and it wouldn't surprise me if AMD decided to specially bin them like Intel does for these silly clocks. I am not waiting on either, because I never buy such processors, but I do feel a special amount of laughter is going to ensue once the 2800X comes out, and it will be directed at Intel. I am indeed waiting on 7nm for any more purchases in both the GPU and CPU markets, as well as PCIe 4.0 and hopefully a new system memory standard beyond DDR4.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
First of all,

Ryzen 7 2700 = 65W TDP
Ryzen 7 2700X = 105W TDP

Core i9-9900K = 95W TDP

Secondly,

If we use a 95W TDP heatsink on the Core i9-9900K, it will not maintain the all-core turbo at 4.7GHz.
Almost every review will use water cooling with 150-200+ watts of cooling capacity. This way the CPU will be able to hold the all-core turbo for longer, thus increasing power consumption and performance above the default 95W TDP (TDP-Up).

Thermal Management (page 88)
https://www.intel.com/content/www/u...core/8th-gen-core-family-datasheet-vol-1.html



I want to see some special clown do 4.7GHz all-core on a Core i9-9900K for 24 hours running Prime95 without burning down a square mile of homes. Then I want to see what the power utilization is. Then I want to know how much the cooling solution costs them. Then I want to laugh when 7nm makes this all an exercise in maximum stupidity in a year.
 
  • Like
Reactions: DarthKyrie

tamz_msc

Diamond Member
Jan 5, 2017
3,821
3,642
136
Without major process-level efficiency improvements the CPU should be drawing around 140W.
It will be interesting to see how Intel does that, since previously the stated TDP has been the sustained power limit (PL1).
It would be pretty sad to see them start understating the power consumption the same way AMD does with Pinnacle Ridge.

If it draws 140W at the advertised specs, then advertise it with 140W TDP as well goddammit.
Don't the high-end Z370 boards already ignore those limits because of the different ways the BIOS is configured among different manufacturers?
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
I want to see some special clown do 4.7GHz all-core on a Core i9-9900K for 24 hours running Prime95 without burning down a square mile of homes. Then I want to see what the power utilization is. Then I want to know how much the cooling solution costs them. Then I want to laugh when 7nm makes this all an exercise in maximum stupidity in a year.
Are you just being deliberately inflammatory in your posts, or do you actually think a 4.7GHz 9900K will melt motherboards?

We have had 130W+ CPUs for a decade and a half now (Prescott, 2003), so it's not like it's anything new. A half-decent HSF will do the job, save the fire brigade for actual emergencies...
 
  • Like
Reactions: Thunder 57

TheGiant

Senior member
Jun 12, 2017
748
353
106
I don't get this

The 2700X already draws much more than 95W when properly loaded, and we also know the numbers for the 8700K (and its increase over the 7700K). No big deal for current coolers.

IMO 9900K power will just match 2700X power, and also match it when OCed (like a 2700X at 4.2GHz vs a 9900K at 4.8GHz or so), just with 25% higher performance from frequency and IPC.

The picture will change as AMD releases their 7nm parts and, IMO, matches Intel in absolute performance. Intel's 10nm fail is such a nice situation for AMD...
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Are you just being deliberately inflammatory in your posts, or do you actually think a 4.7GHz 9900K will melt motherboards?

We have had 130W+ CPUs for a decade and a half now (Prescott, 2003), so it's not like it's anything new. A half-decent HSF will do the job, save the fire brigade for actual emergencies...

Intel Core i7-7820X Skylake-X 8-Core
https://www.newegg.com/Product/Product.aspx?Item=N82E16819117794
- 8 Cores, 16 Threads
- Max Turbo Frequency: 4.3 GHz
- TDP: 140 Watts
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
> 7nm
> PCIE 4.0
> Even higher core count
> Even faster nvme via more layers and faster arm processors

Problem is, Zen 2 will still be on AM4, and hence not much new except being faster, unless they make a new chipset with novel features.

What we actually need is DMI 4.0 on Intel's side and the same on AMD's, i.e. the CPU-to-chipset connection must become much faster. I would argue 4 lanes of PCIe 4.0 are still too slow, but better than no improvement. Or, if all the lanes from the CPU are PCIe 4.0, I could see some good improvements if mobo makers are clever.
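For scale, the chipset-link bandwidths under discussion work out as follows (a Python sketch; the per-lane transfer rates and encodings are the PCIe spec figures, and the x4 width matches a DMI 3.0-class link):

```python
# Usable one-direction bandwidth of an x4 chipset link per PCIe generation.
# Gen1/2 use 8b/10b line coding; Gen3 onward use 128b/130b.

def link_gbs(gen, lanes=4):
    """Approximate payload bandwidth in GB/s for one direction."""
    rate_gts = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}[gen]
    encoding = 0.8 if gen <= 2 else 128 / 130
    return rate_gts * encoding * lanes / 8  # GT/s per lane -> GB/s total

dmi3 = link_gbs(3)  # ~3.94 GB/s: a DMI 3.0-class x4 Gen3 link
dmi4 = link_gbs(4)  # ~7.88 GB/s: a hypothetical "DMI 4.0" x4 Gen4 link
# A single fast NVMe SSD can already come close to saturating the Gen3 link,
# which is the bottleneck being complained about above.
```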

Much more likely is that consumers will skip PCIe 4.0 completely and we will go to PCIe 5.0 directly, as the two specs were ratified very close to each other.

Other reasons to wait are the Meltdown & Spectre thing - who knows what's lurking there - and also the insane RAM prices. At this point I don't want to invest that much into soon-to-be-obsolete RAM, as DDR5 will be available in 2020 at the latest.