14nm 6th Time Over: Intel Readies 10-core "Comet Lake" Die to Preempt "Zen 2" AM4


epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Of course you forgot to point out that it's 143W measured at the 12V ATX rail, before VRM and routing losses; even at 88% efficiency this wouldn't be more than 125W at the CPU, while the 165-185W consumed by the 9900K is measured by the CPU package sensors. If measured at the 12V rail it would amount to roughly 185-210W...

Anyway, quite a methodology you have here. Out of curiosity I checked the Cinebench power figures at ComputerBase for the 2700X: they measured a 147W delta at the mains, so how could GN measure as much at the 12V rail with the same software?

https://www.computerbase.de/2018-06...ition-cpu-test/2/#abschnitt_leistungsaufnahme
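
To put numbers on that, here is a minimal back-of-the-envelope sketch in Python. The 88% VRM efficiency is the figure quoted above; the ~87% PSU efficiency is my own illustrative assumption, not a measured value.

Code:
# Rough sanity check of the rail-vs-mains argument above. The VRM
# efficiency is the 88% figure quoted in this post; the PSU efficiency
# is an assumed, illustrative value (roughly 80 Plus Gold territory).

VRM_EFF = 0.88  # 12V ATX/EPS rail -> CPU package
PSU_EFF = 0.87  # mains (wall) -> 12V rail, assumed

def cpu_from_rail(rail_w):
    """Estimate CPU package power from a 12V rail measurement."""
    return rail_w * VRM_EFF

def rail_from_mains_delta(delta_w):
    """Estimate 12V rail power from a load-minus-idle wall delta."""
    return delta_w * PSU_EFF

print(round(cpu_from_rail(143)))          # GN's 143W rail figure -> ~126W package
print(round(rail_from_mains_delta(147)))  # CB's 147W wall delta  -> ~128W rail

On those assumptions, ComputerBase's 147W wall delta corresponds to roughly 128W at the rail, which is exactly the inconsistency with GN's 143W rail figure being pointed out here.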

I once asked you to point to other reviews that would confirm GN's numbers; of course, no answer. Instead you keep using this case, which is apparently flawed and not even reproducible. If it is reproducible, could you point to another review that says so?

https://www.tweaktown.com/reviews/8602/amd-ryzen-7-2700x-5-2600x-review/index10.html

www.techspot.com/amp/review/1613-amd-ryzen-2700x-2600x/page4.html

https://techreport.com/review/33531/amd-ryzen-7-2700x-and-ryzen-5-2600x-cpus-reviewed/8

https://www.guru3d.com/articles-pages/amd-ryzen-7-2700x-review,7.html

https://www.pcper.com/reviews/Proce...ew-Zen-Matures/Power-Consumption-Overclocki-0

As I said, at the very least this demands some scrutiny like the 9900K numbers did: consistently 40 - 50W higher power draw than the 8700K systems, and 190 - 200W system power draw in CB / Blender when idle power is approx. 50W for most reviewers.

Edit - of course, I see you questioning The Stilt's 140W findings as well here: https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-75
 
Last edited:

Thunder 57

Platinum Member
Aug 19, 2007
2,647
3,706
136
It's not about 'who I believe' but the fact that there are numerous results that show the 2700X exceeding 105W. The Stilt also got 140W in his review.

Another one: https://www.guru3d.com/articles-pages/amd-ryzen-7-2700x-review,7.html

199W system power draw in CB when idle is measured at 51W. That leaves a 148W delta. From experience, only the CPU and RAM are actively engaged during a CB run, and the vast majority of that 148W difference will obviously be from the CPU.

Also https://techreport.com/review/33531/amd-ryzen-7-2700x-and-ryzen-5-2600x-cpus-reviewed/8 showing 195W system power draw in Blender, a full 50W more than the 8700K system.

There are plenty more reviews showing similar findings, which is why I find the lack of scrutiny regarding the 2700X power draw surprising. At the very least, with the varied power figures across different reviews, it warrants further investigation, just like the 9900K did.

The Stilt is certainly reputable, but his numbers appear to be an anomaly when you check almost every review. You can't just deduce it from system power either: under load the CPU will be using more power, but so will the memory and the chipset, and you have data moving around, which costs power as well. It's not as simple as Load - Idle.
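
To put a rough number on that, here is a toy decomposition in Python; every efficiency and component figure in it is an illustrative guess, not a measurement from any of the linked reviews.

Code:
# Toy decomposition of a load-minus-idle wall delta. All the constants
# below are illustrative assumptions, not measurements.

def estimate_cpu_power(wall_load_w, wall_idle_w,
                       psu_eff=0.87,         # wall -> DC, assumed
                       vrm_eff=0.88,         # 12V rail -> package, assumed
                       other_delta_w=15.0):  # RAM/chipset/fans ramping under load
    """Back out CPU package power from two wall readings."""
    dc_delta = (wall_load_w - wall_idle_w) * psu_eff  # losses behind the PSU
    cpu_rail_w = dc_delta - other_delta_w             # strip non-CPU draw
    return cpu_rail_w * vrm_eff                       # losses behind the VRMs

# Guru3D's figures: 199W under a CB load, 51W idle
print(round(estimate_cpu_power(199, 51)))  # ~100W, well under the raw 148W delta

Even with generous guesses, the CPU ends up well below the raw Load - Idle number.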

Also, why do you keep mentioning the 8700K? It already uses less power, according to AnandTech:

91808.png


It also benefits from lower chipset power. Intel has been making 14nm chipsets for a while now; I am pretty sure AMD's original Zen chipsets were still fabbed at 55nm(!). So there are a few more watts. Obviously there are other differences as well.

The question should be: why did it seem that everyone basically agreed with the TDP of the 8700K and the 2700X (except The Stilt, and the GamersNexus review, whose flaws Abwx has already pointed out), but now with the 9900K people have observed that something is different?

It seems that the motherboard manufacturers, or Intel, or both, have decided to change things up and allow this higher PL2 and Tau. I think another reason Intel is facing more scrutiny is because they do not include a HSF. AMD includes one that allows the 2700X to work rather well. Intel would either have to ship a comparable cooler and have benchmarks suffer (if stock cooling was used), or ship it with something more upscale and be criticized for releasing a power hog à la Pentium 4.
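
For anyone unfamiliar with those two knobs, here is a minimal conceptual sketch of how the PL1/PL2/Tau budget works: package power is tracked as an exponentially weighted moving average, and the chip may draw up to PL2 while that average stays below PL1. The constants below are illustrative, not Intel's official defaults.

Code:
# Minimal conceptual sketch of Intel's PL1/PL2/Tau turbo budget.
# Values are illustrative; boards that raise PL2 and Tau (or set them
# effectively unlimited) simply never drop back to PL1.

PL1 = 95.0   # long-term power limit, watts (the advertised TDP)
PL2 = 210.0  # short-term power limit, watts (illustrative board setting)
TAU = 28.0   # time constant of the power average, seconds (illustrative)
DT  = 1.0    # simulation step, seconds

avg = 0.0  # exponentially weighted moving average of package power
for t in range(60):
    draw = PL2 if avg < PL1 else PL1   # turbo until the budget is spent
    avg += (DT / TAU) * (draw - avg)   # EWMA update
    if t % 10 == 0:
        print(f"t={t:2d}s  draw={draw:5.1f}W  avg={avg:5.1f}W")

Under that model a "95W" part legitimately spends its first Tau-ish seconds well above 95W, which is why short benchmarks and long torture tests disagree so sharply.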
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
The Stilt is certainly reputable, but his numbers appear to be an anomaly when you check almost every review. You can't just deduce it from system power either: under load the CPU will be using more power, but so will the memory and the chipset, and you have data moving around, which costs power as well. It's not as simple as Load - Idle.

Also, why do you keep mentioning the 8700K? It already uses less power, according to AnandTech:

91808.png


It also benefits from lower chipset power. Intel has been making 14nm chipsets for a while now; I am pretty sure AMD's original Zen chipsets were still fabbed at 55nm(!). So there are a few more watts. Obviously there are other differences as well.

The question should be: why did it seem that everyone basically agreed with the TDP of the 8700K and the 2700X (except The Stilt, and the GamersNexus review, whose flaws Abwx has already pointed out), but now with the 9900K people have observed that something is different?

It seems that the motherboard manufacturers, or Intel, or both, have decided to change things up and allow this higher PL2 and Tau. I think another reason Intel is facing more scrutiny is because they do not include a HSF. AMD includes one that allows the 2700X to work rather well. Intel would either have to ship a comparable cooler and have benchmarks suffer (if stock cooling was used), or ship it with something more upscale and be criticized for releasing a power hog à la Pentium 4.

I'm sorry, but a 40 - 50W delta compared to the 8700K because of a chipset difference?! I'm not buying that. I'm using the 8700K as a point of reference because it has been tested to draw close to its rated 95W TDP under full load. If the 2700X were indeed a '105W' CPU, then even accounting for chipset differences the difference in total system power draw shouldn't be more than 15W, or at most 20W.
 

Abwx

Lifer
Apr 2, 2011
10,847
3,297
136
www.techspot.com/amp/review/1613-amd-ryzen-2700x-2600x/page4.html

https://techreport.com/review/33531/amd-ryzen-7-2700x-and-ryzen-5-2600x-cpus-reviewed/8

https://www.guru3d.com/articles-pages/amd-ryzen-7-2700x-review,7.html

https://www.pcper.com/reviews/Proce...ew-Zen-Matures/Power-Consumption-Overclocki-0

As I said, at the very least this demands some scrutiny like the 9900K numbers did: consistently 40 - 50W higher power draw than the 8700K systems, and 190 - 200W system power draw in CB / Blender when idle power is approx. 50W for most reviewers.

Edit - of course, I see you questioning The Stilt's 140W findings as well here: https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-75

That must be a case of cognitive dissonance at work here, because all the reviews you linked contradict your claims.

A CPU that consumed 140W would imply a 175-180W delta at the mains; here the power consumption of the whole systems is around 188-205W depending on the review...

As for The Stilt's "findings", all I can say is that the biggest number was measured by Hardware.fr in x264, where the CPU used more power than in their Prime95 test, and even there they measured less than 140W: they got 144W at the 12V ATX rail, which yields something like 126W at the CPU level.
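
As a sanity check on that arithmetic, the claim can be run in the other direction; the efficiency figures are the same assumptions as before, not measured values.

Code:
# Forward check: what wall delta would a genuine 140W CPU imply?
# The VRM/PSU efficiencies are assumptions, not measurements.

def mains_delta_from_cpu(cpu_w, vrm_eff=0.88, psu_eff=0.87):
    """Expected load-minus-idle wall delta for a given package power."""
    return cpu_w / vrm_eff / psu_eff

print(round(mains_delta_from_cpu(140)))  # ~183W expected at the wall
print(round(mains_delta_from_cpu(126)))  # ~165W for the ~126W package figure

With whole-system readings of 188-205W and idle around 50W, the observed wall deltas sit much closer to the ~126W package figure than to a true 140W.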
 
Last edited:
  • Like
Reactions: Despoiler

TheGiant

Senior member
Jun 12, 2017
748
353
106
It's not about 'who I believe' but the fact that there are numerous results that show the 2700X exceeding 105W. The Stilt also got 140W in his review.

Another one: https://www.guru3d.com/articles-pages/amd-ryzen-7-2700x-review,7.html

199W system power draw in CB when idle is measured at 51W. That leaves a 148W delta. From experience, only the CPU and RAM are actively engaged during a CB run, and the vast majority of that 148W difference will obviously be from the CPU.

Also https://techreport.com/review/33531/amd-ryzen-7-2700x-and-ryzen-5-2600x-cpus-reviewed/8 showing 195W system power draw in Blender, a full 50W more than the 8700K system.

There are plenty more reviews showing similar findings, which is why I find the lack of scrutiny regarding the 2700X power draw surprising. At the very least, with the varied power figures across different reviews, it warrants further investigation, just like the 9900K did.
That is exactly what I have been posting for a while, and IMO there are two basic points in the 9900K media madness.

First is what I can only call pure incompetence (I still believe it is not bias) from the reviewers testing the 9900K at 95W:
  • for ages the discussion around the web has been that TDP is not power draw
  • for ages AMD's approach to TDP has been different from Intel's, yet we compare the numbers
  • suddenly the reviews change and treat TDP as absolute power draw
  • yet they don't consider the realistic approach: either people care about power (electricity bill, room overheating, loud cooling) or they care less or not at all, and caring about power means system power draw, not CPU power draw
  • then they limit the 9900K to 95W, while the others (the 8700K and other Intel CPUs like SKL-X, and AMD's, especially the 2700X as the so-called direct 8C/16T competitor) are "unlimited"
  • for me, this makes the 2700X look much worse in my eyes than it really is, and likewise the achievement AMD made with their budget (which is excellent)
  • like others pointed out, in this never-ending competition battle it looks very bad when even sites like TechReport say in their review that the 9900K limited to 95W is just % faster than the 2700X, while also showing that unlimited it draws 30W more power running HandBrake, and that the 2600X draws the same power as the 95W-limited 9900K while the performance is not comparable - I mean, really, WTF? Seriously, this is on the internet in this age?
  • for me this is a lack of competence, and I won't trust any review from those sites when they compare products with such methodology
Second point of incompetence:
  • reviews limiting the top model to 95W are like limiting an 8-cylinder engine to 10 l/100km and saying, whoops, it's barely faster than...
  • incompetence level 2: reviews should target their segment of customers
So the reviews are not targeting the specs (Intel's base-clock power draw...); it is just a pure combination of nothing.
I wonder what methodology will be appropriate for reviewing 7nm Zen 2 when it comes out.
 
  • Like
Reactions: Zucker2k

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
That must be a case of cognitive dissonance at work here, because all the reviews you linked contradict your claims.

A CPU that consumed 140W would imply a 175-180W delta at the mains; here the power consumption of the whole systems is around 188-205W depending on the review...

As for The Stilt's "findings", all I can say is that the biggest number was measured by Hardware.fr in x264, where the CPU used more power than in their Prime95 test, and even there they measured less than 140W: they got 144W at the 12V ATX rail, which yields something like 126W at the CPU level.
https://www.tweaktown.com/reviews/8602/amd-ryzen-7-2700x-5-2600x-review/index10.html

8602_45_amd-ryzen-7-2700x-5-2600x-review.png


No need to use personal insults either. Please just look at the data.

FWIW, I'm not trying to make the 2700X 'look bad' but am pointing out that both Intel and AMD are guilty of their CPUs being able to far exceed the rated TDP.
 
Last edited:

ZGR

Platinum Member
Oct 26, 2012
2,052
656
136
I would argue this is more of a motherboard configuration issue with unlocked chips than with Intel and AMD. But yeah, they are encouraging it far too much.

It is unfortunate that the graph above doesn't include the 2700. That graph has only one unlocked chip from Intel, and no non-X chips from AMD. The 1700, IIRC, was pretty good at maintaining 65W at stock. The 2700 is also a 65W chip.

I would love to see AMD's 45W 8C/16T part in some benchmarks. It is pretty crazy how much performance the i9-9900K loses when capped at 95W!

Personally, I don't care about power consumption as long as my air cooler can handle it.

I do agree with what Ian says about TDP:

So is TDP Pointless? Yes, But There is a Solution
If you believe that TDP is the peak power draw of the processor under default scenarios, then yes, TDP is pointless, and technically it has been for generations. However under the miasma of a decade of quad core processors, most parts didn’t even reach the TDP rating even under full load – it wasn’t until we started getting higher core count parts, at the same or higher frequency, where it started becoming an issue.
But fear not, there is a solution. Or at least I want to offer one to both Intel and AMD, to see if they will take me up on the offer. The solution here is to offer two TDP ratings: a TDP and a TDP-Peak. In Intel lingo, this is PL1 and PL2, but basically the TDP-Peak takes into account the ‘all-core’ turbo. It doesn’t have to be covered under warranty (because as of right now, turbo is not), but it should be an indication for the nature of the cooling that a user needs to purchase if they want the best performance. Otherwise it’s a case of fumbling in the dark.

https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/21

Looking at peak power consumption:
https://www.anandtech.com/bench/CPU-2019/2194
 

tamz_msc

Diamond Member
Jan 5, 2017
3,710
3,554
136
Comparing power consumption figures as reported by different outlets with the numbers reported by Anandtech is completely pointless because of the difference in methodology used to arrive at those numbers.

Any number that is an actual measurement at the wall or the 12V EPS is not comparable with Anandtech's numbers which are a result of queries from the CPU's internal registers.

There is no reason to believe that the "measurements" are equivalent in both cases.
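
For concreteness, the "internal register" style of measurement looks roughly like this on Linux, where Intel's RAPL energy counter is exposed through powercap; this is a sketch assuming the usual sysfs path, and it ignores counter wraparound.

Code:
# Sketch of a register-based (RAPL) package power reading on Linux.
# The sysfs path varies by system; counter wraparound is ignored.

import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

def read_energy_uj():
    with open(RAPL_ENERGY) as f:
        return int(f.read())

e0, t0 = read_energy_uj(), time.time()
time.sleep(5)  # sample window
e1, t1 = read_energy_uj(), time.time()

watts = (e1 - e0) / 1e6 / (t1 - t0)  # microjoules -> joules -> watts
print(f"self-reported package power: {watts:.1f}W")

A wall meter over the same window would additionally include PSU and VRM losses plus every other component in the system, so the two numbers differ by construction.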
 

Abwx

Lifer
Apr 2, 2011
10,847
3,297
136
https://www.tweaktown.com/reviews/8602/amd-ryzen-7-2700x-5-2600x-review/index10.html

8602_45_amd-ryzen-7-2700x-5-2600x-review.png


No need to use personal insults either. Please just look at the data.

FWIW, I'm not trying to make the 2700X 'look bad' but am pointing out that both Intel and AMD are guilty of their CPUs being able to far exceed the rated TDP.

Dunno how they got 152W at the ATX rail, but one thing is certain: this is the biggest number so far, and no other review shows such power draw, so of course you had to take the most dubious number as the holy grail...

Besides, if it was really out of the box, then the 8700 would consume as much as the 8700K, to the point that their honesty or competence can be questioned...

https://www.hardware.fr/articles/975-4/core-i7-8700.html

https://www.computerbase.de/2017-11/intel-core-i7-8700-i5-8600k-test-auto-oc-ddr4/3/

TweakTown, the well-named, surely know how to measure power draw, the right way to keep people "relevantly" informed...

That being said, the 2700X can exceed its rated 105W TDP by 15-20W, but that's not comparable with Intel's fairy-tale TDPs, where a 95W rating can get up to 200-ish watts to please the benchmarks.

Notice that the "65W" 8700 gets up to 105W in Prime95 and 90W in x264 while an 8700T stays at 65W stock; this latter case is more concerning since such a chip can end up used by unsuspecting users in mini boxes with integrated DC/DC converters that are generally in the 90-130W range.
 
  • Like
Reactions: ZGR

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
701
106
Key comment in the link provided by @epsilon84: "remember that this is more a factor of the BIOS and the board vendor." It's right there immediately after the power figures he quoted.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Dunno how they got 152W at the ATX rail, but one thing is certain: this is the biggest number so far, and no other review shows such power draw, so of course you had to take the most dubious number as the holy grail...

Besides, if it was really out of the box, then the 8700 would consume as much as the 8700K, to the point that their honesty or competence can be questioned...

https://www.hardware.fr/articles/975-4/core-i7-8700.html

https://www.computerbase.de/2017-11/intel-core-i7-8700-i5-8600k-test-auto-oc-ddr4/3/

TweakTown, the well-named, surely know how to measure power draw, the right way to keep people "relevantly" informed...

That being said, the 2700X can exceed its rated 105W TDP by 15-20W, but that's not comparable with Intel's fairy-tale TDPs, where a 95W rating can get up to 200-ish watts to please the benchmarks.

Notice that the "65W" 8700 gets up to 105W in Prime95 and 90W in x264 while an 8700T stays at 65W stock; this latter case is more concerning since such a chip can end up used by unsuspecting users in mini boxes with integrated DC/DC converters that are generally in the 90-130W range.

Care to show me an actual 'benchmark' where the 9900K can draw 200W? I'm not talking about heavy AVX2 torture tests like AIDA64 or Prime95 small FFTs, but an actual benchmark that pegs the 9900K at full load at 200W.

RE: the TT results - well, since you so conveniently dismissed the links I provided earlier showing total system consumption to be 40 - 50W higher than the 8700K's, that was one I could find that isolated the CPU power. Of course, I'm not surprised that your first reaction is to question their results, just like you questioned The Stilt's results, or GN's...

Hey, at least you agree that AMD can also exceed its own TDP ratings. Does the blame fall on AMD here, or are mobo makers also ignoring guidelines à la the 9900K?
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,686
136
FWIW, I'm not trying to make the 2700X 'look bad' but am pointing out that both Intel and AMD are guilty of their CPUs being able to far exceed the rated TDP.
We had this discussion before: AMD is doing exactly what Intel is doing. I posted this video from Buildzoid before; this time I think I'd better post a small transcript too.

Comparison between MSI X470 Gaming Plus and Asus Crosshair VII Hero with the same 2700X chip installed:
  • depending on the motherboard you stick it in, the same 2700X will run at different clocks (both ST and MT loads)
  • running Intel Burn Test, the MSI board keeps the CPU at 3.8GHz, the Asus board maintains 4GHz
  • MSI respects the TDP, the Asus board violates it, pulling 20 - 50W more from the 12V rail than the MSI board
The Stilt also signaled this on the forums and people acknowledged the issue, but just like the 8700K, which also ran without power limits on most high-end boards, neither of these CPUs could push the limits to the extent the 9900K did, because they were frequency limited in their stock configuration. (Ryzen was also pretty close to a frequency wall altogether.)

I expect we're in for a long ride until mobo manufacturers go back to the old and healthy habit of actually giving a damn about stock TDP on Auto settings.
 
Last edited:

TheGiant

Senior member
Jun 12, 2017
748
353
106
We had this discussion before: AMD is doing exactly what Intel is doing. I posted this video from Buildzoid before; this time I think I'd better post a small transcript too.

Comparison between MSI X470 Gaming Plus and Asus Crosshair VII Hero with the same 2700X chip installed:
  • depending on the motherboard you stick it in, the same 2700X will run at different clocks (both ST and MT loads)
  • running Intel Burn Test, the MSI board keeps the CPU at 3.8GHz, the Asus board maintains 4GHz
  • MSI respects the TDP, the Asus board violates it, pulling 20 - 50W more from the 12V rail than the MSI board
The Stilt also signaled this on the forums and people acknowledged the issue, but just like the 8700K, which also ran without power limits on most high-end boards, neither of these CPUs could push the limits to the extent the 9900K did, because they were frequency limited in their stock configuration. (Ryzen was also pretty close to a frequency wall altogether.)

I expect we're in for a long ride until mobo manufacturers go back to the old and healthy habit of actually giving a damn about stock TDP on Auto settings.
I think no one disagrees with you or with the results here.

What is different in this situation with the 9900K is that non-technical users get the signal that:
  • you'll burn the room down without water cooling
  • following the "spec", it's barely faster than the 2700X
Both are wrong, and the technical review staff make the other CPUs (which violate the "spec" with the help of the respective mobo and BIOS) look better.
If the review sites applied the same principles to the other CPUs' setups, the signal would be very different, and that is the main point.
I call it incompetence, and it reminds me of the political media and the world-news level of "truth".
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
9600K Power Scaling
As we have seen lately, a lot of YouTubers that seemingly have no knowledge of how things are done in the enthusiast motherboard world have forced ASUS to make a change here. It seems many have found something to direct a new manufactured outrage at in order to get clicks. News Flash: Enthusiast motherboards often do not follow TDP specifications for Intel CPUs. It has been this way for years...and years.

TDP is specified as 95 watts for the 9600K. TDP is defined by Intel for its CPUs as: "Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload. Refer to Datasheet for thermal solution requirements." Let's see how much power the 9600K uses under our three EFI settings.
https://images.hardocp.com/images/articles/1543419040ijyxccpnaw_5_2.png

Our 9600K with no MultiCore Enhancement turned on (however set to Auto) stays right at the Intel-defined TDP specification, as should be expected by anyone with enthusiast motherboard knowledge. Turning on MultiCore Enhancement of course keeps our clocks at 4.6GHz, and we see our power usage flatten out at about ~130W. With our overclock set to 5GHz, we see what is very linear scaling across our thread load, peaking at ~200W. This is however under Prime95, and while not a synthetic workload, it is one that you could consider highly irregular. Under a HandBrake encode load, we saw a CPU Package Power of 133W at 5GHz.
https://m.hardocp.com/article/2018/11/29/intel_core_i59600k_processor_overclocking_review/5
 

Vattila

Senior member
Oct 22, 2004
799
1,351
136
Intel needs to find every guy in upper management who voted against backporting Ice Lake to 14nm and fire them immediately.

Well, I suspect Ice Lake's major features are dependent on 10nm benefits — bigger transistor budget (density) and lower power — focussed on AVX innovations in particular, with execution units, bigger caches, wider data paths and deeper buffers for these wide vector registers, and with the power efficiency needed to run AVX workloads at decent frequencies.

In other words, a backported 14nm Ice Lake would probably have to scrap or relax many of these enhancements, and thus end up pretty much like another, not very interesting, Skylake refresh.

Note that AVX enhancements are indeed planned for Cascade Lake and Cooper Lake server CPUs, and considering these Skylake refreshes were not on the original roadmaps, I guess we can call these enhancements backported Ice Lake features.

Incidentally, the AVX innovation focus may be why Intel's 10nm is so aggressive on density (100.8 million transistors per square millimetre vs 37.5 MTr/mm² for 14nm, or 2x other foundries' 10nm) and power (0.55x vs 14nm). See Intel's February 2017 Technology and Manufacturing Day presentation (source). Perhaps they needed these 10nm process targets for their AVX innovation plans.

Since then, Intel has committed to GPU for parallel compute, and hence no longer solely depends on AVX innovation to compete in this area.
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Well, I suspect Ice Lake's major features are dependent on 10nm benefits — bigger transistor budget (density) and lower power — focussed on AVX innovations in particular, with execution units, bigger caches, wider data paths and deeper buffers for these wide vector registers, and with the power efficiency needed to run AVX workloads at decent frequencies.

In other words, a backported 14nm Ice Lake would probably have to scrap or relax many of these enhancements, and thus end up pretty much like another, not very interesting, Skylake refresh.

The 8700K die isn't that big. Instead of "relaxing enhancements" they could have just made 6C/12T the top end of client Ice Lake on 14nm. Instead we are getting a 10C/20T Skylake?

Note that AVX enhancements are indeed planned for Cascade Lake

Which Cascade Lake are we talking about here? Cascade Lake-AP is already out there . . .
 
  • Like
Reactions: coercitiv

Vattila

Senior member
Oct 22, 2004
799
1,351
136
The 8700K die isn't that big.

I wasn't thinking so much about the total die size, but what you can do within a single core. I presume you can have bigger and wider register files, caches, buffers and data paths on a smaller process, which aren't feasible on a larger process due to trade-offs in frequency, latency and power.

Which Cascade Lake are we talking about here? Cascade Lake-AP is already out there

I was thinking about the AVX enhancements in the server parts, yes, as evidence for Intel's AVX focus, which I understand is the major enhancement for Ice Lake, apart from improved iGPU and memory support (ref. Wikipedia and Wikichip).
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,686
136
I wasn't thinking so much about the total die size, but what you can do within a single core. I presume you can have bigger and wider register files, caches, buffers and data paths on a smaller process, which aren't feasible on a larger process due to trade-offs in frequency, latency and power.
This makes little sense in the context of the transition from 8C to 10C on the current architecture as an alternative to going from 8C Coffee Lake to 8C Ice Lake: the increase in core count is already a big trade-off in frequency, latency and power. We're talking about a ~25% increase in power at ISO clocks and a ring bus that needs to pay a power / performance penalty to accommodate more cores.

(Lost) timing is the more likely reason we're not seeing ICL on 14nm. They crossed the Rubicon with that high-risk / high-reward initial decision; now they have to ride out the storm with a mediocre backup plan.
 

Vattila

Senior member
Oct 22, 2004
799
1,351
136
This makes little sense in the context of the transition from 8C to 10C

Well, I am no CPU architect, but I just presume that with smaller and more efficient transistors, there is opportunity to implement things in the core that can increase IPC. These things constitute features that you cannot backport to a bigger process, without slowing down the core or making the core less power-efficient.

What kind of features, assumed to be in Ice Lake, would it be interesting to backport?
 
Last edited:

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
This makes little sense in the context of the transition from 8C to 10C on the current architecture as an alternative to going from 8C Coffee Lake to 8C Ice Lake: the increase in core count is already a big trade-off in frequency, latency and power. We're talking about a ~25% increase in power at ISO clocks and a ring bus that needs to pay a power / performance penalty to accommodate more cores.

(Lost) timing is the more likely reason we're not seeing ICL on 14nm. They crossed the Rubicon with that high-risk / high-reward initial decision; now they have to ride out the storm with a mediocre backup plan.
How so? Intel can simply carry the 9900K's core-to-frequency profile over to the new 10-core chip. With an increased power budget, they might even be a little bit more aggressive on that front. Look at the 2950X from AMD as an example, which can boost to 4.4GHz, etc.
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,686
136
Well, I am no CPU architect, but I just presume that with smaller and more efficient transistors, there is opportunity to implement things in the core that can increase IPC.
Intel historically implemented "big" arch changes in the relative safety of a proven node, testing each new process first with a conservative upgrade. Presuming their next arch is something that demands specific area and power improvements in order to function (as an upgrade) at all paints an even more reckless and desperate Intel than I would like to imagine.

It's as if until 14nm they had a nice tick / tock thing going, after which they decided to enter Icarus YOLO mode and reach for the Sun or die trying. I don't buy it.

How so? Intel can simply carry the 9900K's core-to-frequency profile over to the new 10-core chip. With an increased power budget, they might even be a little bit more aggressive on that front.
And an improved architecture would better the 9900K with no need for increased power draw. Unless you think Skylake is the epitome of x86 performance and efficiency, in which case we'll all be rocking iPhone CPUs in a few years' time.

This is not about Intel vs. AMD or some other petty forum brawl. This is about Intel essentially recycling their Skylake arch for the past 3½ years (while having multiple teams developing better designs during that timeframe). It's Intel vs. Intel, if you will.