Intel Core i9-9900K Tested in 3DMark, Clocking Up To 5GHz, Faster than Ryzen 2700


PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
So, a 6-core has a base clock of 3.7GHz and runs at 4.3GHz at boost.
Meanwhile at AMD:
https://www.newegg.com/Product/Product.aspx?Item=N82E16819113499
An 8-core with a base clock of 3.7GHz and a max boost of 4.3GHz.

It's a gimmick because the competition can do the same; it's the same physical limitation impacting both AMD and Intel.

Not really.

4.3GHz is the stock all-core turbo speed for the 8700K. 4.3GHz is not the stock all-core turbo for the Ryzen 2700X.

Those are stock speeds. But Intel also leaves more headroom for overclocking: practically every 8700K will reach a 4.8GHz all-core overclock with ease, while barely anyone gets a 4.3GHz all-core overclock on Ryzen.

It isn't just 8-core vs 6-core, either; 6-core Ryzen doesn't really overclock any better than 8-core Ryzen.

Intel just has more frequency headroom (at least 700MHz) in their combination of process and architecture (it is always related to both). So we are clearly not dealing with the same physical limitation.

If we want a good model of how the 9900K will compare with the 2700X, we just need to look at the 8700K vs the 2600X. I wouldn't expect the delta to change much when moving from 6c vs 6c to 8c vs 8c.
 
Last edited:

ub4ty

Senior member
Jun 21, 2017
749
898
96
Truth is truth.
Having an understanding of comp arch allows one to know there is an integer pipeline and a floating-point pipeline in a processor (they're essentially two distinct compute units in a core). The floating-point pipeline is rarely used; GPUs were designed for that purpose. The floating-point pipeline also draws substantially more power, which is why it isn't reflected in TDP figures. This is yet another gaffe/gimmick perpetuated by popular reviewers. Referencing IPC (instructions per clock) is another gimmick, as no other analysis or detail is usually provided, and it is required tbqh. Time to process a program, and the speedups therein, is sufficient.
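For what it's worth, that kind of end-to-end measurement is easy to do yourself. A minimal sketch in Python, where the workload is a hypothetical stand-in for whatever program you actually care about:

```python
import time

def workload() -> int:
    # Stand-in for a real program: pure integer busy work.
    total = 0
    for i in range(10_000_000):
        total += i * i
    return total

start = time.perf_counter()
workload()
elapsed = time.perf_counter() - start
print(f"time to process: {elapsed:.3f}s")

# Run the same script on each machine (or at each clock setting);
# the speedup is simply the ratio of the two wall-clock times.
```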

The amount of confusion and lack of clear detail on this matter speaks for itself.

No one can defy physics: the higher the clock, the less power-efficient the chip.
Professional processors used in the enterprise market shoot for lower clocks, lower power utilization, and support for larger system memory footprints. The idea is to hit the golden middle ground on power/performance where it's most efficient, because every inefficient watt matters when you begin scaling to thousands of nodes. The key also is to put as much as you can in system memory, because the bottleneck is still data retrieval, and a CPU is doing nothing while it waits on the data it needs for computation.

Meanwhile, we have the hotrod desktop market, where it seems no one cares about power utilization and everyone wants bragging rights for clock speeds. What I hear here is mixed signals: people claim they are doing professional loads that actually max out compute resources, but then also argue this is only possible on the least professional hardware configuration. Something is obviously awry.

Distributed computing exists, and no one is bound to one computer. If the load is serious enough, you can distribute it, or pony up and get higher core counts for a task that consumes 8+ cores' worth of compute. Surprise: clocks begin to drop as core count increases. At 8 cores you are arguably already in the enterprise/server market; at 16 cores you most definitely are. However, no one seems to behave like that's what is under the hood. Trying to hotrod at these levels comes with huge power inefficiency, but let's ignore that, as if heat and power utilization were not important.

Someone would rather use two processors' worth of CPU power to get a couple hundred more megahertz. Meanwhile, the memory stall count is soaring. This is where the assertion of professional computing becomes laughable. This is where the consumer-focused benchmarking becomes laughable. This is where doing power analysis on floating-point operations while referencing TDP figures associated with the integer pipeline becomes laughable. The whole thing is a joke. I recognize the enthusiast element, but it clearly conflicts with declarations of professionalism.

No one in the professional market is crying over 2.xx-3.xx clock speeds. There's something called networking/distributed computing that lets you scale across nodes if you are in such dire need to get things done in the blink of an eye. Hell, there's cloud computing on demand. Go for gold with 200 cores' worth of compute.

So, what we have left is gaming. I get the enthusiast appeal of high fps at resolutions the eye will not perceive in high-action scenes. However, this is not a professional market.

One or two cores clocking up to 5GHz is not going to change the world. My life doesn't become exponentially better because Chrome opens a millisecond faster than it used to. This is why I call such things a gimmick. It's a gimmick when AMD does it, and it's a gimmick when Intel does it. It's marketing, as clearly reflected in the thread title. Headline: 8-core processor hits 5GHz. Fine print: only on two cores. How do memory stalls scale with increased CPU clock speed? No reviewer delves into this.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Not really.

4.3GHz is the stock all-core turbo speed for the 8700K. 4.3GHz is not the stock all-core turbo for the Ryzen 2700X.

Those are stock speeds. But Intel also leaves more headroom for overclocking: practically every 8700K will reach a 4.8GHz all-core overclock with ease, while barely anyone gets a 4.3GHz all-core overclock on Ryzen.

It isn't just 8-core vs 6-core, either; 6-core Ryzen doesn't really overclock any better than 8-core Ryzen.

Intel just has more frequency headroom (at least 700MHz) in their combination of process and architecture (it is always related to both). So we are clearly not dealing with the same physical limitation.

If we want a good model of how the 9900K will compare with the 2700X, we just need to look at the 8700K vs the 2600X. I wouldn't expect the delta to change much when moving from 6c vs 6c to 8c vs 8c.

http://home.ku.edu.tr/comp303/public_html/Lecture15.pdf
Memory stalls
Slide 6 - Performance of a CPU if the clock rate is doubled but the memory speed stays the same
Teaser: it only increases by about 40%.
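That number falls out of the standard comp-arch CPI model. A minimal sketch in Python; the stall-cycle figure is an illustrative assumption chosen to reproduce the ~40% result, not taken from the slide:

```python
# Standard model: CPU time = instructions * CPI_total / clock_rate,
# where CPI_total = base CPI + memory stall cycles per instruction.
# Memory latency in nanoseconds is fixed, so doubling the clock
# doubles the stall penalty measured in cycles.

base_cpi = 1.0      # CPI with a perfect memory system
stall_cpi = 0.75    # illustrative stall cycles per instruction at 1x clock

cpi_1x = base_cpi + stall_cpi        # 1.75
cpi_2x = base_cpi + 2 * stall_cpi    # 2.50 -- same nanoseconds, twice the cycles

speedup = (2.0 / cpi_2x) / (1.0 / cpi_1x)    # performance ~ clock_rate / CPI
print(f"speedup from doubling the clock: {speedup:.2f}x")   # 1.40x, not 2x
```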

Memory stalls/Cache Misses/TLB flushes - Spectre/Meltdown
https://www.extremetech.com/computi...-is-wrong-on-pcs-and-getting-worse-every-year

Clocks are not everything, and they run into diminishing returns in a multitude of ways the higher you go. Enthusiast ideology conflicts with reality. There are physical limits at 14nm no matter what special sauce is added. This is why power consumption increases exponentially as you approach the wall 14nm imposes, and then you hit outright instability. Intel has done a lot of things, including unsafe hacks/cheats in its micro-architecture, as revealed by a laundry list of security flaws. Corrections to these have real-world consequences on the order of 10-20% reductions in performance.

Memory stalls are real. I have yet to see any reviewer highlight how real they are. Intel even makes it easy w/ profiling tools.

So, the big clock debate at 8 cores and up. What does 700MHz really buy you when a competitor is already in 4GHz territory and memory speeds are the same? If you care so much about getting something done fast, does it make more sense to increase core count or clocks? And then there's the balls-to-the-wall "I don't care, take my money, I want higher fps in muh vidya" crowd.

Intel no doubt gets higher clocks. The question for me as an informed consumer is: at what cost?
What do I get in return? What's behind the marketing headlines? What is the real-world performance? Why not just go 16-core? Why not distribute the compute across multiple 8-core nodes?
 
Last edited:

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
http://home.ku.edu.tr/comp303/public_html/Lecture15.pdf
Memory stalls
Slide 6 - Performance of a CPU if the clock rate is doubled but the memory speed stays the same
Teaser: it only increases by about 40%.

Memory stalls/Cache Misses/TLB flushes - Spectre/Meltdown
https://www.extremetech.com/computi...-is-wrong-on-pcs-and-getting-worse-every-year

Clocks are not everything, and they run into diminishing returns in a multitude of ways the higher you go.
Enthusiast ideology conflicts with reality. Intel has done a lot of things, including unsafe hacks/cheats in its micro-architecture, as revealed by a laundry list of security flaws. Corrections to these have real-world consequences on the order of 10-20% reductions in performance.

Memory stalls are real. I have yet to see any reviewer highlight how real they are. Intel even makes it easy w/ profiling tools.

So, clocks... what does 700MHz really buy you when a competitor is already in 4GHz territory and memory speeds are the same? If you care so much about getting something done fast, does it make more sense to increase core count or clocks? And then there's: vidya games.

Now you are just shifting the goalposts.

The post I answered was essentially arguing that clock speed limits are essentially the same between Ryzen and Coffee Lake, when they clearly are not. Intel has an easy 700MHz advantage.

What is a 700MHz advantage when we are already in 4GHz territory? Simple: 700/4000 = up to 17.5% faster, depending on workload.

The 9900K will be a premium halo product, and at that end of the market, people simply like to buy the best.

It won't be a case of tradeoffs like the 8700K vs the 2700X, where the 2700X wins a lot of the highly parallel benchmarks and Intel wins gaming and less-threaded work.

It will be a case of the 9900K winning everything. A lot of people will pay a $100 premium over the 8700K to have that clear win. And the 8700K is already an immensely popular CPU.
 
  • Like
Reactions: epsilon84

ub4ty

Senior member
Jun 21, 2017
749
898
96
Now you are just shifting the goalposts.

The post I answered was essentially arguing that clock speed limits are essentially the same between Ryzen and Coffee Lake, when they clearly are not.
The goalpost has been performance/value across all my posts.
What I have highlighted in my detailed commentary is that far more technical detail goes into this equation than the average person can appreciate. What I argued regarding clock speed limits is that both AMD and Intel are restricted by physics, as exemplified by power consumption rising asymptotically toward the wall 14nm imposes.

I could clock all of my processors, both Intel and AMD, a lot higher, but I don't, because the power/heat drawbacks versus the performance gained make it senseless. That drawback is imposed by physics, which imposes diminishing returns: https://en.wikipedia.org/wiki/Diminishing_returns

Both AMD and Intel are subject to this. A small shift in the goalposts towards more headroom goes to Intel.

Intel has an easy 700MHz advantage.

What is a 700MHz advantage when we are already in 4GHz territory? Simple: 700/4000 = up to 17.5% faster, depending on workload.
It isn't that easy, as I just detailed in my reference to comp arch 101. There's something called memory stalls: your CPU spends a notable amount of time waiting on data. On top of this is the reality that Intel pulled shenanigans in its branch-prediction/look-ahead micro-architecture and now has to do TLB/cache flushes to ensure security compliance. So I take your simple 700MHz figure and add in memory stalls... that cuts about 50% off. So, 350MHz. Then I chop off an overall 5-10% due to security patches, depending of course on the workload. Magically you begin to see parity, something you'd expect for a platform as mature as x86.
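To spell that arithmetic out explicitly (these are my assumed discounts from above, not measurements):

```python
base_mhz = 4000                  # both camps are already in 4GHz territory
advantage_mhz = 700              # the claimed raw clock advantage

effective_mhz = advantage_mhz * 0.5              # assume ~50% lost to memory stalls
ratio = (base_mhz + effective_mhz) / base_mhz    # 1.0875x
ratio_patched = ratio * (1 - 0.075)              # assume ~7.5% lost to security patches

print(f"effective advantage: {effective_mhz:.0f} MHz")   # 350 MHz
print(f"net speedup: {ratio_patched:.3f}x")              # ~1.006x -- near parity
```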

The 9900K will be a premium halo product, and at that end of the market, people simply like to buy the best.

It won't be a case of tradeoffs like the 8700K vs the 2700X, where the 2700X wins a lot of the highly parallel benchmarks and Intel wins gaming and less-threaded work.

It will be a case of the 9900K winning everything. A lot of people will pay a $100 premium over the 8700K to have that clear win. And the 8700K is already an immensely popular CPU.
Vidya gamers indeed want the best and highest performance at any cost.
I can't argue with you there. I am thankful they exist; it keeps prices down in the other segments. Then AMD releases the 2800X and you find the gap shrinks even further. The 2800X will be as senseless to me as the 1800X was, and as the 9900K and 8700K are, because I'm not a gamer, and when I need more performance I go to 16 cores or distribute across nodes. I'd buy an EPYC processor over a 32-core gimped Threadripper. Interestingly, it often makes sense to skip the chromed-out show horse from any company.
 
Last edited:

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
It isn't that easy, as I just detailed in my reference to comp arch 101.
There's something called memory stalls: your CPU spends a notable amount of time waiting on data.
On top of this is the reality that Intel pulled shenanigans in its branch-prediction micro-architecture and now has to do TLB/cache flushes to ensure security compliance. So I take your simple 700MHz figure and add in memory stalls... that cuts about 50% off. So, 350MHz. Then I chop off an overall 5-10% due to security patches, depending of course on the workload. Magically you begin to see parity, something you'd expect for a platform as mature as x86.

Yes, you did some fine theory-crafting. But practical reality shows that Coffee Lake doesn't lose 50% of the potential benefit of clock speed boosts. It looks a lot closer to linear in many situations; more like 90% of the benefit versus your claimed 50%. As for the patch overhead, that really doesn't apply to many desktop workloads; it's much more of a server impact.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Theory-crafting, lol... AMD would kill for 1GHz more Fmax in both its CPU and GPU businesses. The way things are right now, their products are missing clocks versus the competition. Vega would have an easier time at 2GHz+, and a 5GHz Ryzen would be easy to recommend.

It does not take a rocket scientist to see that Intel will have a 25-35% advantage with the 9900K in ALL workloads.
 
  • Like
Reactions: Arachnotronic

ub4ty

Senior member
Jun 21, 2017
749
898
96
Yes, you did some fine theory-crafting. But practical reality shows that Coffee Lake doesn't lose 50% of the potential benefit of clock speed boosts. It looks a lot closer to linear in many situations; more like 90% of the benefit versus your claimed 50%. As for the patch overhead, that really doesn't apply to many desktop workloads; it's much more of a server impact.
My crafting and real-world performance hits are based on heavy multi-threaded workloads with lots of random data access. I think that is classified as server/professional work, where you have to marry performance with price, power utilization, and heat generation. The 8700K and 9900K seem to target highly cacheable, straightforward workloads, and shine in gaming. The patches matter for loads with lots of branch-prediction/look-ahead thrashing and lots of I/O. Beyond gaming, I don't see the sell. Graphics rendering is dominated by more cores, not clocks. So, there goes that.
Microsoft Word launching a millisecond faster isn't a selling point.

So, in the real world, the workload where you see 90% of the benefit is gaming. In other, more professional tasks, that drops significantly due to memory stalls. More cores and more memory lanes win.
Gaming = enthusiasts.

The ring bus is a dead-end approach, which is why Intel went mesh... That too has hidden gotchas with regard to scaling. It will be hilarious to eventually see Intel go MCM. A last send-off for the enthusiasts was in order before they fundamentally reorganize their approach, so: +2 more cores on the ring bus.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Theory-crafting, lol... AMD would kill for 1GHz more Fmax in both its CPU and GPU businesses. The way things are right now, their products are missing clocks versus the competition. Vega would have an easier time at 2GHz+, and a 5GHz Ryzen would be easy to recommend.

It does not take a rocket scientist to see that Intel will have a 25-35% advantage with the 9900K in ALL workloads.
I could clock all of my processors, both Intel and AMD, a lot higher, as well as my GPUs.
I wouldn't kill for any more GHz, because I know power and heat increase exponentially with clocks.

People with serious compute needs kill for more cores:
Intel® Xeon® Gold 6148 Processor
  • 27.5 MB L3 Cache
  • 20 Cores
  • 40 Threads
  • 150.0 W Max TDP
  • 2.40 GHz Clock Speed
  • 3.70 GHz Max Turbo Frequency
Not clocks. This reality kills the clock shill.

I dream of PCIe 4.0. Of HBM2 on-die with the CPU. Of NVMe speeds doubling and latency getting cut in half. Of new affordable paradigms for system memory, like Optane.

It's always important to upgrade when the serious aspects change.
7nm is a year off, along with big changes. Nothing coming from AMD or Intel is exciting until then.
 

TheGiant

Senior member
Jun 12, 2017
748
353
106
Seriously, wall of text...

An Intel 8C that is overclockable to 5GHz, if it isn't there at base, is coming, coolable by "teh random coolerz",

and it is THE CPU of choice.

The 8700K was the first 6C CPU with no sidegrades, and now it looks like the i9-9900K is the first 8C CPU with no sidegrades (no trading single-core perf for multicore).
 
  • Like
Reactions: JoeRambo

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Not clocks. This reality kills the clock shill.

You have no clue, right? The 6148 is actually a very sensible CPU (I know, because I am involved in buying servers with them); it has 20 cores and rather nice all-core turbo clocks. I think in that range Intel has only one CPU with 18 cores and 3.7GHz all-core turbo clocks that could be better "at speed".

Speed is very important for quite a few servers, as user service times depend on the speed of the CPU.
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
So why do you care about TDP so much then? It's just an arbitrary figure, and there isn't even an industry standard for it; AMD and Intel derive their TDPs differently anyway.
Because I don't want
You have no clue, right? The 6148 is actually a very sensible CPU (I know, because I am involved in buying servers with them); it has 20 cores and rather nice all-core turbo clocks. I think in that range Intel has only one CPU with 18 cores and 3.7GHz all-core turbo clocks that could be better "at speed".

Speed is very important for quite a few servers, as user service times depend on the speed of the CPU.
The speed of the storage medium and the main memory would be just as big a factor, if not more so, than the clock speed of the CPU used in servers.
 
  • Like
Reactions: ub4ty

Abwx

Lifer
Apr 2, 2011
10,850
3,298
136
It does not take a rocket scientist to see that Intel will have a 25-35% advantage with the 9900K in ALL workloads.

Wet dreams, literally. Even in Cinebench it will have trouble reaching 15%, assuming it's clocked at 4.7GHz.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Seriously, wall of text...

An Intel 8C that is overclockable to 5GHz, if it isn't there at base, is coming, coolable by "teh random coolerz",

and it is THE CPU of choice.

The 8700K was the first 6C CPU with no sidegrades, and now it looks like the i9-9900K is the first 8C CPU with no sidegrades (no trading single-core perf for multicore).
Performance.
I keep hearing this thrown around, but all it seems to relate to is gaming.

You have no clue, right? The 6148 is actually a very sensible CPU (I know, because I am involved in buying servers with them); it has 20 cores and rather nice all-core turbo clocks. I think in that range Intel has only one CPU with 18 cores and 3.7GHz all-core turbo clocks that could be better "at speed".

Speed is very important for quite a few servers, as user service times depend on the speed of the CPU.
I do have a clue, which is why I referenced it. It is a sensible CPU, and Intel makes it, so there's no claim of AMD fan service in my referencing it. The clocks are a base of 2.40GHz with a max boost of 3.7GHz. The point of the reference is that a very costly professional CPU clocks a GHz lower in base and boost, but has 20 cores to execute across and an amazing 150W TDP. If you are involved in buying servers with them, you can attest to everyone commenting here on the importance of power utilization relative to performance in a data center. My point of reference is to highlight that professional workloads center on this. Clocks are important, but not so important that you skew power utilization to 'enthusiast' levels in search of them. My core point is to highlight price/performance ratios and the diminishing returns of clock speed, and to ultimately question the idea that an enthusiast gaming processor should be taken seriously as a platform for professional workloads.


Because I don't want

The speed of the storage medium and the main memory would be just as big a factor, if not more so, than the clock speed of the CPU used in servers.
And likely far more costly. My NVMe drive costs more than my CPU, which is understandable when you consider it sports a 5-core ARM processor. CPUs don't tell storage or memory, "wait, I'm still working"; they wait on memory/storage. All the new memory-centric architectures and technologies are forming along this paradigm. I ask the CPU clock shills to excuse me for being far more excited about this than about someone cranking a CPU to 11.

What people believe: [chart: CPU time split between busy and idle]

Reality: [chart: CPU time split between busy, stalled, and idle]


Memory tech is where it's at. Any CPU from AMD or Intel is more than capable and performant.

THE cpu of choice
The CPU of choice that typically does the volume sales is the middle-tier processor, where the most value is present and there is no "take my money please" tax. The longer x86 has been around, the more transistors have shrunk, and the more cores we get, the less the bleeding edge really matters. The gimmicks have scaled with the convergence of the platform. I try not to get duped by them. I look forward to dropping 7nm Zen 2 into the same socket as the first gen.

2019 is the year of choice for the next upgrades in both CPU and GPU.
 
Last edited:

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Wet dreams, literally. Even in Cinebench it will have trouble reaching 15%, assuming it's clocked at 4.7GHz.

Try telling that to The Stilt, who predicted a 30% advantage for the 9900K based on the projected 4.7GHz - 5.0GHz boost clocks.

People need to get over Cinebench as the holy grail of benchmarks; it's actually a bit of an outlier in the grand scheme of things, because it halves Ryzen's IPC deficit against CFL compared to the majority of other benchmarks.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
My crafting and real-world performance hits are based on heavy multi-threaded workloads with lots of random data access. I think that is classified as server/professional work, where you have to marry performance with price, power utilization, and heat generation. The 8700K and 9900K seem to target highly cacheable, straightforward workloads, and shine in gaming. The patches matter for loads with lots of branch-prediction/look-ahead thrashing and lots of I/O. Beyond gaming, I don't see the sell. Graphics rendering is dominated by more cores, not clocks. So, there goes that.
Microsoft Word launching a millisecond faster isn't a selling point.

So, in the real world, the workload where you see 90% of the benefit is gaming. In other, more professional tasks, that drops significantly due to memory stalls. More cores and more memory lanes win.
Gaming = enthusiasts.

The ring bus is a dead-end approach, which is why Intel went mesh... That too has hidden gotchas with regard to scaling. It will be hilarious to eventually see Intel go MCM. A last send-off for the enthusiasts was in order before they fundamentally reorganize their approach, so: +2 more cores on the ring bus.

Your theory-crafting is based on your wildly incorrect assumptions and nothing more.

Real-world clock scaling shows nothing like your predictions:

A bunch of multi-threaded application benchmarks for 8700K at 4GHz:
https://www.techspot.com/article/1616-4ghz-ryzen-2nd-gen-vs-core-8th-gen/page2.html
A bunch of multi-threaded application benchmarks for 8700K at 5.2GHz:
https://www.techspot.com/review/1613-amd-ryzen-2700x-2600x/page2.html

That is a 30% clock speed boost. Let's look at the clock speed scaling for that boost.

And it results in practical performance boosts in heavily multi-threaded applications of:
Cinebench: performance boost of 26%
Handbrake: performance boost of 27%
Blender: performance boost of 28%

Average clock speed scaling: 90%

Or exactly the 90% I predicted, and obviously nowhere near your 50% prediction.
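Here is that arithmetic spelled out, using the clock speeds and gains quoted above:

```python
clock_gain = 5.2 / 4.0 - 1                   # 30% clock speed boost

gains = {"Cinebench": 0.26, "Handbrake": 0.27, "Blender": 0.28}

# Scaling = realized performance gain / clock speed gain, per application.
for app, g in gains.items():
    print(f"{app}: {g / clock_gain:.0%}")    # 87%, 90%, 93%

avg = sum(gains.values()) / len(gains) / clock_gain
print(f"average clock speed scaling: {avg:.0%}")   # 90%
```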

Basically all you have been doing is engaging in unsubstantiated FUD against Intel's upcoming desktop flagship.
 

Abwx

Lifer
Apr 2, 2011
10,850
3,298
136
Try telling that to The Stilt, who predicted a 30% advantage for the 9900K based on the projected 4.7GHz - 5.0GHz boost clocks.

With the "good" software even 50% is possible, notice that The Stilt use Caselab s Euler 3D, a software that is known to exageratly favour Intel, and removing the CPU dispatcher will not make it more honnest since there are instances of CPU dispatch that are not controled by this routine, and wich still ask fo Genuine_Intel to take a given code path.


People need to get over Cinebench as the holy grail of benchmarks; it's actually a bit of an outlier in the grand scheme of things, because it halves Ryzen's IPC deficit against CFL compared to the majority of other benchmarks.

So if AMD catches up in software that was for years used as the reference to display Intel's better IPC, then this software must be branded an outlier? What's next to be irrelevant? x264 useless, but x265 sometimes on point?


[chart: hardware.fr video encoding results]



https://www.hardware.fr/articles/975-9/encodage-video-x264-x265.html

[chart: hardware.fr video encoding results]
 
  • Like
Reactions: lightmanek

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,323
4,904
136
My i7-8700K sustains 120W+ package power under AVX loads at 4.7GHz all-core with a -50mV undervolt, zero AVX offset. It can be considered a good sample in that respect.

If the all-core turbo for the i9-9900K is truly 4.7GHz, I'd expect 140W+, easily. As good as Intel's 14nm+++(+) process is, you're not getting 2 additional cores for free.
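As a rough back-of-envelope (the uncore share below is my assumption, not a measurement):

```python
measured_w = 120     # observed 8700K package power, 6 cores @ 4.7GHz under AVX
uncore_w = 15        # assumption: uncore/ring/IMC share of the package power

per_core_w = (measured_w - uncore_w) / 6     # ~17.5 W per core at these clocks
est_9900k_w = uncore_w + 8 * per_core_w      # two extra cores, same clocks

print(f"estimated 9900K package power: {est_9900k_w:.0f} W")   # ~155 W
```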

C'mon, that is cherry-picking prices. Right now the 2700X is $330.00 on Amazon; the 9900K at $450.00 is only about 36% more. With the clock speed and IPC advantage, performance should be close to 20% faster, so the price seems quite reasonable to me.

Edit: guess Epsilon beat me to the post, but great minds think alike (relax joking).

There is no "X" in my post. See the thread title? It has no "X" either. The 2700 is the same silicon as the 2700X and many people opted for slightly lower maximum boost clocks in exchange for better efficiency. It's still $230 after bundle discount at Microcenter, and that includes a decent stock cooler.

[charts: AnandTech power consumption results for the 2700X and 2700]


Source: https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/8

Obviously, the i9-9900K will retain the performance crown (at least until Zen 2 in 1H 2019), but it requires certain sacrifices in perf/W and perf/$, which is the point that several people are making in this thread.
 
  • Like
Reactions: Ranulf

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
My i7-8700K sustains 120W+ package power under AVX loads at 4.7GHz all-core with a -50mV undervolt, zero AVX offset. It can be considered a good sample in that respect.

If the all-core turbo for the i9-9900K is truly 4.7GHz, I'd expect 140W+, easily. As good as Intel's 14nm+++(+) process is, you're not getting 2 additional cores for free.



There is no "X" in my post. See the thread title? It has no "X" either. The 2700 is the same silicon as the 2700X and many people opted for slightly lower maximum boost clocks in exchange for better efficiency. It's still $230 after bundle discount at Microcenter, and that includes a decent stock cooler.

[charts: AnandTech power consumption results for the 2700X and 2700]


Source: https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/8

Obviously, the i9-9900K will retain the performance crown (at least until Zen 2 in 1H 2019), but it requires certain sacrifices in perf/W and perf/$, which is the point that several people are making in this thread.

Either there are some issues with the power consumption numbers published by AT and TH, or the silicon variation between different 2700X specimens is EXTREME.
Both AT & TH state "package power", which makes me believe they didn't actually measure the consumption and used a software reading instead.

Hardware.fr measured 144W from the EPS12V connector, while Tweaktown measured 152W (during x264 encoding).
With 83% VRM efficiency (which is typical for the C7H press kit board), that translates to 120W and 126W. I personally measured 127.64W (132W peak) using DCR, during x264 encoding.
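That conversion is simple enough to check (a quick sketch of the arithmetic, using the efficiency figure above):

```python
vrm_efficiency = 0.83    # typical for the C7H press-kit board, as stated above

for source, eps12v_w in (("Hardware.fr", 144), ("Tweaktown", 152)):
    cpu_w = eps12v_w * vrm_efficiency    # power actually delivered to the CPU
    print(f"{source}: {eps12v_w} W at EPS12V -> ~{cpu_w:.0f} W at the CPU")
# Hardware.fr: ~120 W; Tweaktown: ~126 W
```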

[charts: hardware.fr EPS12V and Tweaktown power consumption measurements]
 

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
Seriously, wall of text...

An Intel 8C that is overclockable to 5GHz, if it isn't there at base, is coming, coolable by "teh random coolerz",

and it is THE CPU of choice.

The 8700K was the first 6C CPU with no sidegrades, and now it looks like the i9-9900K is the first 8C CPU with no sidegrades (no trading single-core perf for multicore).

Yeah, and Intel could have done this a long time ago, so to be quite honest, redacted. I'm so pissed at Intel for holding out, I swear I don't think I can get over it. $400 quad-cores for a decade; then AMD makes a good chip, and now all of a sudden we have EIGHT CORES on the mainstream platform for under $500, and the non-HT version for $350-ish? Oh, I swear to God they can go to hell.



Please watch your descriptions in the tech forums.
It was a little over-the-top.


esquared
Anandtech Forum Director
 
Last edited by a moderator:

ub4ty

Senior member
Jun 21, 2017
749
898
96
Your theory-crafting is based on your wildly incorrect assumptions and nothing more.

Real-world clock scaling shows nothing like your predictions:

A bunch of multi-threaded application benchmarks for 8700K at 4GHz:

A bunch of multi-threaded application benchmarks for 8700K at 5.2GHz:
https://www.techspot.com/review/1613-amd-ryzen-2700x-2600x/page2.html

That is a 30% clock speed boost. Let's look at the clock speed scaling for that boost.

And it results in practical performance boosts in heavily multi-threaded applications of:
Cinebench: performance boost of 26%
Handbrake: performance boost of 27%
Blender: performance boost of 28%

Average clock speed scaling: 90%

Or exactly the 90% I predicted, and obviously nowhere near your 50% prediction.

Basically all you have been doing is engaging in unsubstantiated FUD against Intel's upcoming desktop flagship.

A highly cacheable, highly parallel, straightforward linear march through memory, with little to no branching, low cache miss rates, and thus few memory stalls, shows 90% clock speed scaling. Amazing revelation. It's as if someone already realized this, built a completely tuned architecture around it, and called it a GPU.
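To make the contrast concrete, here is a minimal sketch (illustrative, not a rigorous benchmark) comparing a linear pass over a buffer with the same work done in cache-hostile random order:

```python
import random
import time

N = 5_000_000
data = list(range(N))

seq_idx = range(N)                # linear, cache-friendly order
rand_idx = list(range(N))
random.shuffle(rand_idx)          # same indices, cache-hostile order

def walk(indices):
    total = 0
    for i in indices:
        total += data[i]          # identical arithmetic either way
    return total

for label, idx in (("sequential", seq_idx), ("random", rand_idx)):
    start = time.perf_counter()
    walk(idx)
    print(f"{label}: {time.perf_counter() - start:.2f}s")

# The random walk typically runs noticeably slower even in Python:
# the core is stalled on memory, and more GHz would not close that gap.
```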

My "unsubstantiated FUD" has to do with the kinds of workloads that involve the other half of computing: what general-purpose processors were made for and excel at, branching and random memory access.
I'll leave it up to you to find out which benchmarks and workloads this refers to. Spoiler: it's the set of benchmarks in which a 1700X beats an 8700K hands down by double digits, and clock scaling is more in line with what I stated. It's why server processors chase cores and not clocks. An 8-core processor is a server-grade processor in my mind. A desktop processor still idles in the 4-core to 6-core region for 'power users'. Hilariously, a dual-core/4-thread processor still handles what the majority of people do on a desktop: browse the web, shatpost, and consume (not produce) media.

GPUs were designed for what you just described, for a reason, and have an architecture to match. AMD made an architecture that scales from the desktop to the server and excels at the workloads I run, which you consider unsubstantiated FUD. No gimmicks, no alphabet soup of tuned pipelines. Judging by your confidence level and upvotes, this exemplifies my point about the ridiculous overstatements made by people who buy these processors and their limited knowledge of comp arch. You're totally doing rendering/encoding 24/7. Meanwhile, in the professional world, the trend for such workloads is towards GPUs. Ray tracing is also headed towards GPUs. You have a solid point for gaming, and that's what the 8700K was tuned for. It's best to just say that. I didn't buy an 8-core for gaming; I have a 4-core Intel processor for that.

Computer architecture evolves constantly at levels beyond the average person's purview. An 8-core processor is a server-grade powerhouse. I am thankful AMD designed an affordable, server-focused chip and dropped it into a desktop socket. It is the opposite of what Intel contends: that AMD put a desktop processor into the server market and tied the dies together with glue. The real-world performance speaks for itself. Meanwhile, I've tracked the 8700K and several other Intel processors from launch, hoping and praying Intel would get its act together. They didn't; the price rarely drops. Meanwhile, the 1700 is a $170 8-core, and AMD has moved on to higher pastures and the scaling their far-reaching architecture affords them.

P.S. I'm posting from a dual-core Intel system at the moment; it's what I use for these kinds of things. It's an i5 running at 2.4GHz. I don't think life would change if it were a 5GHz proc...
 
Last edited: