Intel Core i9-9900K Tested in 3DMark, Clocking Up to 5GHz, Faster Than Ryzen 2700


epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
I don't get this

The 2700X already draws much more than 95W when properly loaded, and we also know the numbers for the 8700K (and its increase over the 7700K). No big deal for current coolers.

IMO the 9900K's power draw will simply match the 2700X's, both at stock and when overclocked (like a 2700X at 4.2GHz vs a 9900K at 4.8-? GHz), just with ~25% higher performance from frequency and IPC.

The picture will change once AMD releases its 7nm parts, which IMO will match Intel in absolute performance. Intel's 10nm failure is such a nice situation for AMD...

You're surprised? I'm not. It's an Intel chip, and certain people with agendas will use whatever angle they can to put it in a bad light, IMO. They can't even use the 'you need a new chipset to run it' argument, because it's backwards compatible with Z370, and trust me, numerous posters have called my 8700K / Z370 combo a 'dead end platform with no upgrade path'.

Logically, the only things they can attack are the price and the power consumption, or the '95W TDP', which has suddenly become an issue because, well, it's Intel. A 2700X draws in excess of 140W under full load and no one mentions it, but lo and behold, if an Intel chip's power usage manages to exceed TDP it's all doom and gloom, it will be stuck at 3.6GHz base clock, it's a gimmick, etc.

Make no mistake, if this were renamed an AMD R7 2900X with boost clocks of 4.7 - 5.0GHz, AMD fans would be lauding it as the second coming...
 
  • Like
Reactions: CHADBOGA and Pilum

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
You're surprised? I'm not. It's an Intel chip, and certain people with agendas will use whatever angle they can to put it in a bad light, IMO. They can't even use the 'you need a new chipset to run it' argument, because it's backwards compatible with Z370, and trust me, numerous posters have called my 8700K / Z370 combo a 'dead end platform with no upgrade path'.

Logically, the only things they can attack are the price and the power consumption, or the '95W TDP', which has suddenly become an issue because, well, it's Intel. A 2700X draws in excess of 140W under full load and no one mentions it, but lo and behold, if an Intel chip's power usage manages to exceed TDP it's all doom and gloom, it will be stuck at 3.6GHz base clock, it's a gimmick, etc.

Make no mistake, if this were renamed an AMD R7 2900X with boost clocks of 4.7 - 5.0GHz, AMD fans would be lauding it as the second coming...

I believe you are exaggerating..........

Core i9-9900K at 95W TDP has a 3.7GHz base clock,
R7 2700X at 105W TDP has a 3.7GHz base clock,

Nobody said they don't both go above their base clocks and TDPs.

But if both are measured at their default TDPs, they should be very close in performance ;)
 
  • Like
Reactions: lightmanek

ub4ty

Senior member
Jun 21, 2017
749
898
96
You're surprised? I'm not. It's an Intel chip, and certain people with agendas will use whatever angle they can to put it in a bad light, IMO. They can't even use the 'you need a new chipset to run it' argument, because it's backwards compatible with Z370, and trust me, numerous posters have called my 8700K / Z370 combo a 'dead end platform with no upgrade path'.

Logically, the only things they can attack are the price and the power consumption, or the '95W TDP', which has suddenly become an issue because, well, it's Intel. A 2700X draws in excess of 140W under full load and no one mentions it, but lo and behold, if an Intel chip's power usage manages to exceed TDP it's all doom and gloom, it will be stuck at 3.6GHz base clock, it's a gimmick, etc.

Make no mistake, if this were renamed an AMD R7 2900X with boost clocks of 4.7 - 5.0GHz, AMD fans would be lauding it as the second coming...
> mfw you've lost all your arguments and this is the way you go out.
I own both Intel and AMD rigs. I bought nothing but Intel until Ryzen came out. When it did, its price/performance value blew anything Intel has out of the water, and it still does. Hands down. There is zero argument about this, which is why I buy it. My Ryzen processor consumes exactly what it states it consumes, and every reviewer states this:
https://www.tomshardware.com/reviews/amd-ryzen-7-2700x-review,5571-12.html
Performance rises and power consumption falls (if only slightly). There's truth to AMD's marketing material, so says our lab equipment. Ryzen 7 2700X really does deserve attention for these results.

Intel processors are notorious for not doing this and exceeding the power envelope all the time.

So, what makes the pro-Intel argument ridiculous is: it has less price/performance value, it most definitely consumes more power, and clock rates are bound by physical limits no one can exceed on 14nm. So you win nowhere, because everything an Intel processor wins at is subject to a diminishing-returns formula and far heftier costs. This is also not an argument, because it's pure numbers and facts.

If you have enough money, don't care about price/performance value, and just want the best-performing, highest-clocked processor: Intel is your choice, hands down. But don't for a minute think you can pass this choice off as somehow being of better price/performance value, because it is not.

When Intel finally changes this formula around and acts like it's competing for my $$$ against real competition, I will buy them again. Not a second before.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
I believe you are exaggerating..........

Core i9-9900K at 95W TDP has a 3.7GHz base clock,
R7 2700X at 105W TDP has a 3.7GHz base clock,

Nobody said they don't both go above their base clocks and TDPs.

But if both are measured at their default TDPs, they should be very close in performance ;)
http://media.bestofmicro.com/O/5/765509/original/image008.png

He's not exaggerating, he's telling a flat-out lie.
Here's the power draw in Prime95.

See if you can find the Intel 6-core 8700K.
Look next for the 1950X.
Then, hilariously, find the Intel 8-core 7820X.
This is stock, and this is why Intel is a flippin' joke.

You can't defy physics. If you want Intel's ridiculous clock numbers, you're going to pay in power consumption. What people get testy about are the flat-out lies and denial people post to try to justify paying absurd premiums for Intel. It happens with almost everything, including cars. These people are common: they like spending far more than they need to on something, and then try to justify it with absurdity. My view is: spend your money how you wish. I have no say in that. However, don't try to justify it with absurd arguments.
 
Last edited:
  • Like
Reactions: DarthKyrie

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
I believe you are exaggerating..........

Core i9-9900K at 95W TDP has a 3.7GHz base clock,
R7 2700X at 105W TDP has a 3.7GHz base clock,

Nobody said they don't both go above their base clocks and TDPs.

But if both are measured at their default TDPs, they should be very close in performance ;)

I'm not, and even The Stilt confirmed this with his own testing. But this is the link to the GN review:
https://www.gamersnexus.net/hwrevie...w-game-streaming-cpu-benchmarks-memory/page-3
22_power-cinebench-nt.png


As I said, double standards... AMD can freely exceed TDP and it's brushed under the carpet...

EDIT - Added The Stilt's findings in the following link. As the poster above me shows, it's incredibly easy to cherry-pick data to 'back up' your points.

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-72

The power consumption

When comparing the new flagship 2700X SKU against its predecessor the 1800X, the 2700X provides on average 6.1% higher single-threaded performance and 10.2% higher multithreaded performance when using a 400-series motherboard. The improvement however doesn't come without a cost; despite the advertised power rating of the 2700X having increased by only 10W (or 10.5%), the actual power consumption has increased by significantly more: over 24%. At stock, the CPU is allowed to consume >= 141.75W of power and, more importantly, that is a sustainable limit and not a short-term burst-type limit as on Intel CPUs (PL1 vs. PL2).

Personally, I think that AMD should have rated these CPUs for 140W TDP instead of the 105W rating they ended up with. The R7 2700X is the first AMD CPU I've ever witnessed consuming more power than its advertised power rating. And honestly, I don't like that fact one bit. Similar practices are being exercised on the Ryzen Mobile line-up, however with one major difference: the higher-than-advertised power limit (e.g. a 25W boost on 15W-rated SKUs) is not sustainable, but instead a short-term limit like on Intel CPUs. The way I see it, either these CPUs should have been rated for 140W from the get-go, or alternatively the 141.75W power limit should have been a short-term one and the advertised 105W figure a sustained one.
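
For reference, that 141.75W figure is exactly 1.35x the advertised 105W TDP — the sustained package power limit The Stilt documented for AM4. A minimal sketch of the arithmetic (the 1.35 multiplier is his reported figure, not an official spec-sheet value):

Code:
# Sustained package power limit implied by the ratings quoted above.
# The 1.35x PPT multiplier comes from The Stilt's Ryzen findings;
# treat it as his measured figure, not an official spec.

def am4_ppt(tdp_watts, multiplier=1.35):
    """Sustained package power tracking (PPT) limit for an AM4 CPU."""
    return tdp_watts * multiplier

tdp = 105                   # advertised rating of the R7 2700X
ppt = am4_ppt(tdp)          # 105 * 1.35 = 141.75W, matching the quote
print(f"Advertised TDP: {tdp}W, sustained PPT limit: {ppt:.2f}W")
print(f"Headroom over the rating: {ppt / tdp - 1:.0%}")  # 35%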

Still think I'm exaggerating? ;)
 
Last edited:
  • Like
Reactions: mikk

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
I'm not, even The Stilt confirmed this with his own testing. But this is the link to the GN review:
https://www.gamersnexus.net/hwrevie...w-game-streaming-cpu-benchmarks-memory/page-3


As I said, double standards... AMD can freely exceed TDP and it's brushed under the carpet...

You are exaggerating because nobody said the 95W TDP is an issue. The issue is that when reviews hit the net, the numbers you will see will not be at 95W TDP for the 9900K or at 105W TDP for the R7 2700X.

TB3.JPG


If we only measure the CPUs at their default TDPs of 95W and 105W, the performance should be very close, because both of them will only be at their base clocks of 3.7GHz.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
You are exaggerating because nobody said the 95W TDP is an issue. The issue is that when reviews hit the net, the numbers you will see will not be at 95W TDP for the 9900K or at 105W TDP for the R7 2700X.

TB3.JPG


If we only measure the CPUs at their default TDPs of 95W and 105W, the performance should be very close, because both of them will only be at their base clocks of 3.7GHz.

Then both AMD and Intel are at fault. I'm not exaggerating anything: a 2700X will freely consume 140W under default BIOS settings, and a 9900K most likely will too, or at least 125W based on the power consumption of an 8700K.

Everyone is getting bent out of shape over the 95W TDP when in reality it's a meaningless number for enthusiasts: it only matters if you use a stock Intel HSF, or run a low-end board with poor VRMs that strictly enforces the 95W power limit.

Since 99.9% of people buying a 9900K would already have a Z370 mobo or would buy a Z390, that leaves thermal throttling as the real issue with the 95W TDP, if you use a HSF that is only rated to dissipate 95W. Basically any HSF around the $30 mark will be able to deal with this, so again it's really a non-issue unless you skimp on the cooling for a $450 CPU.
 
Last edited:

ub4ty

Senior member
Jun 21, 2017
749
898
96
I'm not, and even The Stilt confirmed this with his own testing. But this is the link to the GN review:
https://www.gamersnexus.net/hwrevie...w-game-streaming-cpu-benchmarks-memory/page-3
22_power-cinebench-nt.png


As I said, double standards... AMD can freely exceed TDP and it's brushed under the carpet...

EDIT - Added The Stilt's findings in the following link. As the poster above me shows, it's incredibly easy to cherry-pick data to 'back up' your points.

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-72



Still think I'm exaggerating? ;)
So a 2700X 8-core consumes as much power as a 16-core 1950X in Cinebench, yet consumes its TDP wattage of 105W in Prime95?
HAHAHA

And what's the reason for this obvious and absurd discrepancy for every CPU?
Integer pipeline vs. floating-point pipeline... Comp Arch 101.
Floating-point pipelines typically have 50% more stages than an integer pipeline, and when one is active all stages are firing. Now, who here uses their CPU like a GPU, crunching floating-point ops for the majority of their computing workloads? This test became popular because AMD's Bulldozer FPU pipeline was garbage, and AMD likes to point out that it isn't any longer. Floating-point ops will wreak havoc on a CPU.

If you want higher clocks, you're facing ever-diminishing returns w.r.t. insanely increased power consumption. Neither Intel nor AMD can beat physics, so why would I shell out absurd amounts of money to see how diminished the value equation can get? A 1700 8-core works just fine at half the TDP. I can literally buy two 1700s for the price of one 2700X and still beat it on power consumption. Even a 1950X does, in Cinebench. My overarching point was and will remain: the average person knows nothing about CPU architecture, so they can't even begin arguing about the finer details. Most compute hardware is far more than any average person uses. In an age in which 8 cores are ubiquitous, I'm not shelling out mountains of money to either AMD or Intel for clocks.

You can do the majority of your compute tasks on a friggin' iPad, tbqh.

At the high end there's an idiot tax, all the time. I say this noting that the details are coming out for the 32-core 2nd-gen Threadripper, and it appears to be priced at $1800. That's idiot tax right there. I'd buy a second 1950X rig and load it down before I ever thought of spending $1800 on a gimped 32-core.

The high end always comes with an idiot tax. In the past, the incremental bump was huge. Now, almost everything has converged. The same holds for cars. You can get a Civic with surround sound, leather seats and climate control; such features no longer distinguish a luxury car. If someone really wants to pay for the title of luxury, by George, a company's going to make that super-special model to milk them. If you don't care about an inferno baking your room, by George, they're going to send power usage into the stratosphere. An average human being puts out about 100 watts of heat, so the idea that it doesn't matter because you have a cooling solution is kind of hilarious. A rig consuming 500 watts of power is like having 4 people standing around you while you work. This is the very reason clocks are so clamped in a data center. No one runs these absurd clocks for professional work, because of the diminished value and the bankrupting cost of the power and cooling needed to get there. Server processors are in the 2.x GHz to 3.x GHz range, and those are people maxing out compute on serious workloads. Meanwhile, the hobbyist argues:
I NEED 5GHZ
Idiot tax.
 
Last edited:

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
So you unironically want me to believe a stock 2700X 8-core consumes as much power as a 16-core 1950X in Cinebench, yet consumes its TDP wattage of 105W in Prime95?
HAHAHA
I don't need YOU to believe anything, actually. You're free to believe anything you want, or even be as condescending as you want; it doesn't bother me. It's more for general discussion around the whole TDP and power consumption topic, because people are focusing on the 95W figure as some kind of limit that can't be exceeded...
 
  • Like
Reactions: mikk

PhonakV30

Senior member
Oct 26, 2009
987
378
136
I will wait for reviews, but I'm sure socket 1151 can't feed 8 cores at 4.8GHz or above. We saw the numbers on the 7820X.
 
  • Like
Reactions: dark zero
May 11, 2008
19,561
1,195
126
Isn't TDP derived from the base clock? Pretty sure a 4.7GHz 9900K would pull more than 95W under full load - an 8700K already does that. Simple maths suggests it could be a 125W chip as you said, perhaps a bit more since it's also running at higher clocks than the 8700K.

WRT performance, if the final clock speeds are indeed 4.7GHz ACT, I struggle to see how it wouldn't be significantly faster than a 2700X, which turbos to 4GHz ACT. Intel also enjoys an IPC advantage, so I'd expect it to beat a 2700X by an average of 20% or more in applications.

According to your charts, even a 7820X already beats the 2700X slightly, and that is clocked at the same 4GHz as the 2700X. Compared to the 7820X, a 9900K has a larger cache (16MB vs 11MB), 17.5% higher clocks, and is ring-bus based, which helps latency-sensitive apps and gaming. Speaking of gaming, I don't think we will see it perform much better than an 8700K, and most of those gains will probably be due to the higher clocks rather than the extra 2 cores / 4 threads.

According to Intel, this is the explanation for TDP on Intel processors:
Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload. Refer to the Datasheet for thermal solution requirements.

Usually, when turbo is active, the voltage must be increased before the clock speed increases, and thus power consumption goes up.
This is no different from Ryzen or any other CPU; even my 65W TDP-rated A10-6700 consumes a lot more when turboing up. Power consumption will always be higher at turbo clocks.
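
A rough sketch of why turbo blows past a base-clock TDP, using the textbook dynamic-power approximation P ≈ C·V²·f. The clocks are the 9900K figures discussed in this thread; the voltages are illustrative guesses, not measurements:

Code:
# Dynamic CPU power scales roughly with frequency times voltage
# squared (P ~ C * V^2 * f). Turbo raises both at once, so power
# grows much faster than clock speed. Voltages are made-up examples.

def scaled_power(p_base, f_base, f_turbo, v_base, v_turbo):
    """Estimate turbo power from base power via P ~ V^2 * f scaling."""
    return p_base * (f_turbo / f_base) * (v_turbo / v_base) ** 2

p = scaled_power(p_base=95, f_base=3.7, f_turbo=4.7,
                 v_base=1.00, v_turbo=1.20)  # hypothetical voltages
print(f"Estimated all-core turbo power: {p:.0f}W")  # ~174W

A 27% clock bump plus a 20% voltage bump already lands well past the 125-150W range being guessed at in this thread, which is why the 95W number only describes base-clock operation.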
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
According to Intel, this is the explanation for TDP on Intel processors:


Usually, when turbo is active, the voltage must be increased before the clock speed increases, and thus power consumption goes up.
This is no different from Ryzen or any other CPU; even my 65W TDP-rated A10-6700 consumes a lot more when turboing up. Power consumption will always be higher at turbo clocks.

Which is precisely my point: pointing at the TDP and claiming the clocks are unrealistic is looking at things the wrong way, because the TDP is derived from the chip running at base clocks, not at the max turbo frequency.

I'm certain that the 9900K will be a power-hungry chip drawing well in excess of 100W, maybe even 150W, especially under worst-case AVX workloads.

As others have said though, all will be revealed soon in reviews...
 
  • Like
Reactions: William Gaatjes

Abwx

Lifer
Apr 2, 2011
10,953
3,472
136
It's an Intel chip, and certain people with agendas will use whatever angle they can to put it in a bad light, IMO.

You have such an example in your own post, but aimed at the competition:


A 2700X draws in excess of 140W under full load and no one mentions it...

No one mentions it because there's no serious review that says so; the most I've seen to date is in this review:

https://www.hardware.fr/articles/975-3/consommation-efficacite-energetique.html

If you want the numbers under Prime95 (10% lower than x264):

https://www.hardware.fr/articles/974-7/overclocking-pratique.html
 
May 11, 2008
19,561
1,195
126
Which is precisely my point: pointing at the TDP and claiming the clocks are unrealistic is looking at things the wrong way, because the TDP is derived from the chip running at base clocks, not at the max turbo frequency.

I'm certain that the 9900K will be a power-hungry chip drawing well in excess of 100W, maybe even 150W, especially under worst-case AVX workloads.

As others have said though, all will be revealed soon in reviews...

I agree.
Having a little experience with a power meter at the 230V input, and having used CPU torture benchmarks with the A10-6700 in the past, I learned that a heatsink has to be able to handle at least double the TDP to be effective for auto-overclocking CPUs, because when the CPU turbos up but gets too hot, it will clock down again to protect itself. For short bursts, a small TDP-sized cooler is sufficient; for sustained turboing it is not.
With my power meter at the 230V input, I noticed the 2600 uses a lot more than its 65W TDP with all-core turbo during Prime95 torture tests.
I have to do some tests again and write the numbers down in an Excel or Calc sheet.
With my current heatsink, which is specified for a TDP of 160W, on a 65W TDP 2600 the fan hardly spins up. It stays around 1000 RPM, and the CPU holds turbo clocks nonstop during torture tests while the temperature maxes out at 63-64C.
This all confirms for me that even a Ryzen 2600 will be enough for me for years, because it easily handles all the games and programs I have, and for future games it has more than enough grunt.
Not to derail the thread, but this of course also applies to all modern Intel CPUs.

TL;DR: An overdimensioned cooling solution is a good thing.
Of course, when manually overclocking, a good cooling solution is even more important.
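
The "2x TDP" rule of thumb above, as a quick sizing check — a sketch using the CPUs from this thread; the rule is the poster's experience, not a vendor spec:

Code:
# Rule of thumb from the post above: for sustained all-core turbo,
# size the heatsink for roughly double the advertised TDP.

def cooler_rating_needed(tdp_watts, margin=2.0):
    """Heatsink rating needed to hold turbo clocks indefinitely."""
    return tdp_watts * margin

for name, tdp in [("Ryzen 5 2600", 65), ("Core i9-9900K", 95)]:
    print(f"{name}: {tdp}W TDP -> aim for a "
          f">= {cooler_rating_needed(tdp):.0f}W-rated cooler")

The 160W-rated heatsink on the 65W 2600 described above clears that 130W bar, which lines up with the reported nonstop turbo at 63-64C.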
 

Abwx

Lifer
Apr 2, 2011
10,953
3,472
136
I'm not, and even The Stilt confirmed this with his own testing. But this is the link to the GN review:
https://www.gamersnexus.net/hwrevie...w-game-streaming-cpu-benchmarks-memory/page-3
22_power-cinebench-nt.png


As I said, double standards... AMD can freely exceed TDP and it's brushed under the carpet...

EDIT - Added The Stilt's findings in the following link. As the poster above me shows, it's incredibly easy to cherry-pick data to 'back up' your points.

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-72



Still think I'm exaggerating? ;)

What happened at this site? Because for Cinebench, here is what Computerbase measured:

https://www.computerbase.de/2018-04/amd-ryzen-2000-test/5/#abschnitt_leistungsaufnahme

A 147W delta measured at the wall, when GN says it's as much on the ATX connector...

FTR, 147 x 0.75 = ~110W; that's what is actually drawn by the CPU. And to get back OT, notice that both the 8700 and 8700K use this amount in the Prime95 test just below the CB test...
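
The 0.75 factor is just the combined conversion efficiency between the wall and the CPU package. A minimal sketch, with PSU/VRM efficiencies picked as illustrative assumptions that multiply out to roughly 0.75 (not figures from either review):

Code:
# Converting an at-the-wall power delta to CPU package power.
# The efficiency numbers are illustrative assumptions chosen to
# multiply out near the 0.75 factor above, not measured values.

def cpu_power_from_wall(wall_delta_w, psu_eff=0.88, vrm_eff=0.85):
    """Estimate package power from a measured wall-power delta."""
    return wall_delta_w * psu_eff * vrm_eff

delta = 147  # Computerbase wall delta for the 2700X in Cinebench (W)
print(f"~{cpu_power_from_wall(delta):.0f}W at the CPU package")  # ~110W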
 
  • Like
Reactions: lightmanek
May 11, 2008
19,561
1,195
126
Do not forget the efficiency losses in both the PSU and the SMPS VRM on the motherboard; added together, the losses can easily be around 25% during CPU torture tests. And don't forget the power draw of the DIMMs, which also need power. CPU power consumption can be as low as 70% of input power during torture tests, roughly estimated.
 
  • Like
Reactions: mikk
May 11, 2008
19,561
1,195
126
Since everything costs power, disabling HT increases dark silicon, and if the HT circuitry limits the overclock, then an 8C/8T 9700K CPU makes sense.
The 8C/8T part might become the ultimate CPU for gamers who overclock their CPU to the maximum.
Has anybody ever tested the max overclock of an 8700K with HT enabled and then again with HT disabled?
I am really curious whether this idea makes any sense, and whether the clock speed increase justifies having HT disabled.
It should in thread-limited games, where there are fewer threads than available cores.

edit:
Added 9700K text.
 
Last edited:

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
If the rumored frequencies for the 9900K are accurate, then it should be > 31% faster in ST workloads and > 29% faster in MT workloads on average (in my test suite, including >= 256-bit workloads) than the 2700X.
In Cinebench R15, for example, the differences would be significantly smaller: > 20% higher ST and > 17% higher MT.

This assumes the 2700X maintains a 4.05GHz average all-core frequency and a 4.35GHz single-core frequency, while the 9900K sustains a 4.7GHz all-core frequency and a 5GHz single-core frequency.

2700X would most likely require ~ 10% less power than the 9900K, on average (MT workloads).
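
For what it's worth, the clock ratios alone account for most of those Cinebench deltas, assuming similar per-clock throughput in that benchmark (my illustrative arithmetic, not The Stilt's methodology):

Code:
# Frequency ratios implied by the clocks quoted above. If per-clock
# Cinebench throughput were identical, these ratios would be the
# whole gap; the quoted >17% MT / >20% ST figures sit just above
# them, the remainder being per-clock (IPC) differences.

clocks = {"MT (all-core)": (4.70, 4.05), "ST (single-core)": (5.00, 4.35)}
for label, (intel_ghz, amd_ghz) in clocks.items():
    print(f"{label}: 9900K {intel_ghz}GHz vs 2700X {amd_ghz}GHz "
          f"-> clock ratio {intel_ghz / amd_ghz - 1:+.1%}")
# MT: +16.0%, ST: +14.9%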
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Since everything costs power, disabling HT increases dark silicon, and if the HT circuitry limits the overclock, then an 8C/8T 9700K CPU makes sense.
The 8C/8T part might become the ultimate CPU for gamers who overclock their CPU to the maximum.
Has anybody ever tested the max overclock of an 8700K with HT enabled and then again with HT disabled?
I am really curious whether this idea makes any sense, and whether the clock speed increase justifies having HT disabled.
It should in thread-limited games, where there are fewer threads than available cores.

edit:
Added 9700K text.
There could be some truth to this, though the average 8600K overclock doesn't seem to be any higher than an 8700K's.

I might try this on my 8700K when I have some spare time. With HT enabled I can't get 5.1GHz stable, most likely because of my average HSF (CM Hyper 212) plus the chip not being delidded, so temps are the limiting factor. From what I've seen, though, disabling HT does reduce temps a lot since the cores are not fully utilised, so it could give me the thermal headroom to finally get 5.1GHz stable.
 
May 11, 2008
19,561
1,195
126
There could be some truth to this, though the average 8600K overclock doesn't seem to be any higher than an 8700K's.

I might try this on my 8700K when I have some spare time. With HT enabled I can't get 5.1GHz stable, most likely because of my average HSF (CM Hyper 212) plus the chip not being delidded, so temps are the limiting factor. From what I've seen, though, disabling HT does reduce temps a lot since the cores are not fully utilised, so it could give me the thermal headroom to finally get 5.1GHz stable.

Ah, thank you.
It would be fun if x86 CPUs got the same thread enable/disable functionality that IBM's POWER CPUs have, with the accompanying OS support.
It seems the OS on POWER can turn SMT on and off and set the number of threads to 1, 2, 4 or 8 on the fly.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
What we actually need is DMI 4.0 on Intel's side and the same on AMD's, i.e. the CPU-to-chipset connection must become much faster. I would argue 4 lanes of PCIe 4.0 are still too slow, but better than no improvement. And if all the lanes from the CPU were PCIe 4.0, I could see some good improvements, if mobo makers are clever.

Well, AM4 doesn't really need a faster "DMI" link, since it already has a dedicated x4 link for NVMe. Of course more I/O is always useful, but at that point (multiple NVMe) I'd have already moved up a notch to TR. The x16+x4+x4 (for the FCH) is fine for the mainstream socket. If you want, you can always split the x16 into x8/x8 with an X470/X370 board, so you can use the second slot for an additional NVMe drive, at the cost of some (minimal) graphics performance.

On the Intel side of things, you have the DMI link I/O bottleneck. They could solve that by doing a DMI 4.0 upgrade or by moving to a wider DMI (x8). Or they could just, gasp, do what AMD has done and provide a dedicated x4 link off the CPU. When all I/O hangs off the x4 DMI 3.0 link to the PCH, it will become a bottleneck sooner or later. I find Intel dealing with this unlikely, since Intel is first and foremost focused on mobile devices, desktop being, if not exactly an afterthought, then at least not a priority. This doesn't matter for mobile applications, so it's a long way down the to-do list.
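
Rough per-lane arithmetic behind that bottleneck claim, using the published PCIe line rates and encoding overhead (DMI 3.0 is electrically a PCIe 3.0 x4 link; the SSD figure is a typical high-end NVMe drive, not a spec value):

Code:
# Usable bandwidth of a PCIe link: line rate (GT/s) times encoding
# efficiency, per lane, times lane count. Rates and the 128b/130b
# encoding are the published PCIe 3.0/4.0 spec figures.

GEN = {3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}  # GT/s, encoding

def link_gb_per_s(gen, lanes):
    rate_gt, enc = GEN[gen]
    return rate_gt * enc / 8 * lanes

print(f"DMI 3.0 (PCIe 3.0 x4): {link_gb_per_s(3, 4):.2f} GB/s "
      f"shared by ALL chipset I/O")
print(f"PCIe 4.0 x4:           {link_gb_per_s(4, 4):.2f} GB/s")
# A single fast PCIe 3.0 x4 NVMe SSD can read at ~3.5 GB/s,
# i.e. nearly the whole DMI 3.0 budget on its own.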
 

RichUK

Lifer
Feb 14, 2005
10,320
672
126
So a 2700X 8-core consumes as much power as a 16-core 1950X in Cinebench, yet consumes its TDP wattage of 105W in Prime95?
HAHAHA

And what's the reason for this obvious and absurd discrepancy for every CPU?
Integer pipeline vs. floating-point pipeline... Comp Arch 101.
Floating-point pipelines typically have 50% more stages than an integer pipeline, and when one is active all stages are firing. Now, who here uses their CPU like a GPU, crunching floating-point ops for the majority of their computing workloads? This test became popular because AMD's Bulldozer FPU pipeline was garbage, and AMD likes to point out that it isn't any longer. Floating-point ops will wreak havoc on a CPU.

If you want higher clocks, you're facing ever-diminishing returns w.r.t. insanely increased power consumption. Neither Intel nor AMD can beat physics, so why would I shell out absurd amounts of money to see how diminished the value equation can get? A 1700 8-core works just fine at half the TDP. I can literally buy two 1700s for the price of one 2700X and still beat it on power consumption. Even a 1950X does, in Cinebench. My overarching point was and will remain: the average person knows nothing about CPU architecture, so they can't even begin arguing about the finer details. Most compute hardware is far more than any average person uses. In an age in which 8 cores are ubiquitous, I'm not shelling out mountains of money to either AMD or Intel for clocks.

You can do the majority of your compute tasks on a friggin' iPad, tbqh.

At the high end there's an idiot tax, all the time. I say this noting that the details are coming out for the 32-core 2nd-gen Threadripper, and it appears to be priced at $1800. That's idiot tax right there. I'd buy a second 1950X rig and load it down before I ever thought of spending $1800 on a gimped 32-core.

The high end always comes with an idiot tax. In the past, the incremental bump was huge. Now, almost everything has converged. The same holds for cars. You can get a Civic with surround sound, leather seats and climate control; such features no longer distinguish a luxury car. If someone really wants to pay for the title of luxury, by George, a company's going to make that super-special model to milk them. If you don't care about an inferno baking your room, by George, they're going to send power usage into the stratosphere. An average human being puts out about 100 watts of heat, so the idea that it doesn't matter because you have a cooling solution is kind of hilarious. A rig consuming 500 watts of power is like having 4 people standing around you while you work. This is the very reason clocks are so clamped in a data center. No one runs these absurd clocks for professional work, because of the diminished value and the bankrupting cost of the power and cooling needed to get there. Server processors are in the 2.x GHz to 3.x GHz range, and those are people maxing out compute on serious workloads. Meanwhile, the hobbyist argues:
I NEED 5GHZ
Idiot tax.

You seem a bit emotional.
 
Aug 11, 2008
10,451
642
126
Then both AMD and Intel are at fault. I'm not exaggerating anything: a 2700X will freely consume 140W under default BIOS settings, and a 9900K most likely will too, or at least 125W based on the power consumption of an 8700K.

Everyone is getting bent out of shape over the 95W TDP when in reality it's a meaningless number for enthusiasts: it only matters if you use a stock Intel HSF, or run a low-end board with poor VRMs that strictly enforces the 95W power limit.

Since 99.9% of people buying a 9900K would already have a Z370 mobo or would buy a Z390, that leaves thermal throttling as the real issue with the 95W TDP, if you use a HSF that is only rated to dissipate 95W. Basically any HSF around the $30 mark will be able to deal with this, so again it's really a non-issue unless you skimp on the cooling for a $450 CPU.

This whole thing is absurd. Holy crap, it is a K-model, balls-to-the-wall overclockable chip. Anybody who is worried about the TDP bought the wrong chip. But obviously, since it has the same number of cores, better IPC, faster stock clocks, and more overclocking headroom than the 2700X, detractors have to find something to complain about. In reality, power consumption at the same clocks, as The Stilt showed in his earlier post, is very close. But the AMD camp has pretty effectively promoted the "false news" that Ryzen is much more efficient than Coffee Lake. The real reason CL can consume a lot of power is that you can actually reach 5GHz clock speeds, in stark contrast to Ryzen.
 

TheGiant

Senior member
Jun 12, 2017
748
353
106
I am not a CPU designer, but history suggests it is rather tough to take a chip from 4 to 5GHz with such high IPC. It took Intel something like 5 years to do so, and in my personal opinion it is not only about the manufacturing process.

If Intel really manages to release an 8C, ring-bus, high-IPC chip with an all-core turbo of 4.7GHz, it's a massive achievement. I hope AMD manages that too with 7nm.

With both, I'll believe it when I see it. Finally a worthy upgrade from my i5-6600K @ 4.5GHz.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,408
2,440
146
This is a reminder to stop with the callouts and keep the discussion civil. If callouts continue, infractions will be issued.
 
  • Like
Reactions: whm1974