Discussion Intel - the cost of BACKPORTING


Kocicak

Senior member
Jan 17, 2019
982
973
136
Seeing the preliminary results in the AnandTech 11700K review, the performance of this CPU overall seems to be only marginally better than the previous generation product, and in some cases even worse.

What is the economic sense and impact on the company of the decision to backport this new CPU to 14 nm technology? How much money did this backport actually cost? Was that cost really worth it, when the only result is that you have some "fill-in product" for a few months before Alder Lake comes?

If it does not make sense financially, then why did they do it? They must have known well what the end result of this would be.
 
  • Like
Reactions: krumme

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Sort of agree on Haswell, but that was a great CPU for mobile.

Also, Skylake does quite a bit better with high memory speeds. The newer memory standards need to be a speed grade higher to be equal to the older generation. So DDR2-533 = DDR-400, DDR3-1066 = DDR2-800, etc.
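The speed-grade rule of thumb above mostly comes down to absolute CAS latency: each DDR generation doubles the transfer rate but also roughly doubles the CAS cycle count at a given grade, so the latency in nanoseconds only improves one grade up. A quick sketch (the CL values are typical JEDEC figures, assumed here for illustration):

```python
def cas_latency_ns(transfer_rate_mt_s, cas_cycles):
    # True CAS latency in nanoseconds: CL cycles divided by the I/O clock,
    # which runs at half the (double-pumped) transfer rate.
    io_clock_mhz = transfer_rate_mt_s / 2
    return cas_cycles * 1000 / io_clock_mhz

# Typical JEDEC CAS latencies (assumed, not taken from the post)
modules = [("DDR-400", 400, 3), ("DDR2-533", 533, 4),
           ("DDR2-800", 800, 5), ("DDR3-1066", 1066, 7)]
for name, rate, cl in modules:
    print(f"{name:9} CL{cl}: {cas_latency_ns(rate, cl):.1f} ns")
```

DDR2-533 lands at the same ~15 ns as DDR-400, while the next grade up (DDR2-800) finally pulls ahead, which matches the equivalence the post describes.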

Skylake's HD 530 was actually a nice advancement over the one in the 4790K. Yeah, sure, it wasn't the GT3e in the 5775C, but a solid 25% faster.
Well, Haswell is kind of the prime example of the Intel design decisions that led to where we are: all their desktop chips had to share silicon with mobile, and mobile got priority. Same thing with Skylake's graphics increase. That wasn't better development so much as a decision that laptops and desktops had enough CPU performance, both in core count and overall, so they dedicated more die space to graphics. Hell, if they hadn't needed the die to be larger to fit all the connection points, graphics wouldn't have seen an increase either.

What it means is that we know Intel was specifically avoiding doing anything too complicated or expensive in their core development during this period, instead putting the effort into efficiency for mobile, better IMCs, and better graphics (mostly at Apple's behest).

It still holds, though: just because we know they were holding back doesn't mean we know they can keep up with the generational steps AMD has been making. Intel is the one that now has a point to prove.
 
  • Like
Reactions: Tlh97

ondma

Platinum Member
Mar 18, 2018
2,721
1,281
136
A modern browser alone can make good use of a quad core with multi-threading. You have a very poor understanding of what kind of resources people can end up using just doing office work at home, let alone in professional environments.
I have 2 laptops with dual cores, and they both are perfectly adequate for browsing, e-mail, social media and online banking/shopping. (You know, the kind of thing most people do with their laptops.)
 
  • Like
Reactions: Magic Carpet

coercitiv

Diamond Member
Jan 24, 2014
6,201
11,903
136
I have 2 laptops with dual cores, and they both are perfectly adequate for browsing, e-mail, social media and online banking/shopping. (You know, the kind of thing most people do with their laptops.)
And I have laptops with a dual core and a quad core, and also desktops with 4c/4t, 6c/6t, and 6c/12t, and I can see how browsing and other office duties scale with more threads. All have fast SSDs, all have 16GB of RAM. On the dual-core laptops you need to keep the number of concurrent tasks to a minimum to ensure adequate performance - which usually means doing only one thing at a time. All it takes is a simple video call, and browsing performance on a JS-heavy website takes a dive.

The Chrome browser alone was able to use 4-5 cores with bursts of up to 8 cores to render a webpage in 2015. It's been 6 years since then, and we're talking just about rendering a webpage... no other background tasks. No calls, no cloud sync, no other browser tabs, no AV, music, chat, mail client, OS updates or housekeeping.
 

Kocicak

Senior member
Jan 17, 2019
982
973
136
I chose not to react to the multithreading nonsense, which is off topic here. I wish you would stop talking about it.

BTW I cannot believe that TheELF was serious when he posted about the integrated graphics and the multithreading. He had some other agenda. I will not speculate about this, because my knowledge of psychology is not sufficient.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
I chose not to react to the multithreading nonsense, which is off topic here. I wish you would stop talking about it.

BTW I cannot believe that TheELF was serious when he posted about the integrated graphics and the multithreading. He had some other agenda. I will not speculate about this, because my knowledge of psychology is not sufficient.

He is serious, to the point that he believes his points wholeheartedly. I have seen some long-term posters here let their bias shine through long after they should, but TheELF is one of maybe five people that I have ignored/blocked, because it goes beyond a bias. If that is the only card he feels he has left to play, he will play it.
 

jpiniero

Lifer
Oct 1, 2010
14,599
5,218
136
It's ironic to be told this given your history of claims on this forum.

I'm inclined to agree that Intel won't build big cores elsewhere unless they're ready to dump the fabs. It'd be pretty interesting if they did, though.

Of course, we don't really know how serious Intel is about spending that $20B, and whether they're just trolling Biden for money...
 

lyonwonder

Member
Dec 29, 2018
31
23
81
I wonder if Intel would have been better off shrinking Skylake to 10nm for the desktop instead of backporting Cove to 14nm? A 10nm Skylake probably would have had enough die space for a 12-core i9 and would have competed better with AMD's 12-16 core Ryzen offerings. Though I guess it didn't cross Intel's mind, since they have enough problems with 10nm as it is.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Heh. Seems like I was somewhat more accurate than you were on Rocketlake performance and power consumption.

No, you weren't. Ignoring how you claimed it would never exist as it does, you claimed, for example, that it wouldn't have lower-TDP CPUs at all. Pretty much any time you made a specific claim, it was wrong.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Them reworking the core is a direct answer to the calculated loss in clock speed. Skylake at 10nm probably would have performed about 5% worse compared to Zen 2. They might not be able to compete in multitasking prowess, but they are only slightly subpar in single and low-core usage. Honestly, availability aside, I don't think there is a good reason to go gen 11, but at least it is within reasonable limits in a few select use cases.

Besides the fact that yields may still be crap, Skylake even with more cores on 10nm would have bombed. Probably the single worst launch for them since Prescott, maybe even Willamette.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
A 10nm Skylake probably would have had enough die space for a 12-core i9 and would have competed better with AMD's 12-16 core Ryzen offerings. Though I guess it didn't cross Intel's mind, since they have enough problems with 10nm as it is.

So... Cannonlake? Because that's what Cannonlake is: Skylake with small changes, using 10nm.

What it means is that we know Intel was specifically avoiding doing anything too complicated or expensive in their core development during this period, instead putting the effort into efficiency for mobile, better IMCs, and better graphics (mostly at Apple's behest).

Does Rocketlake fare better than Haswell and Skylake did? I don't think so.

Besides, they NEEDED some of the gains. Thanks to Haswell we have laptops that last 8-10 hours now. Without that effort we would have been in the 5-7 hour range. How much worse would it have looked compared to ARM? Sure, ARM laptops do better, but the 8-10 hour range is acceptable enough that you start worrying about other things, such as performance and compatibility.

What about graphics? What if they had stayed at the shoddy GMA X3000 level of performance and support? That chip had programmable vertex/pixel shaders but hardware T&L performance that was below their own Netburst chips. The team refused to accept that the time had passed when you could offload graphics duties to the CPU just to sell a few more of them.

Apple and ilk may have accelerated some of the transition but it was inevitable.

Actually, I think people would have been more accepting if they hadn't stayed at 4 cores for so long. Quad cores came with Kentsfield in late 2006. Actually, they announced quad-core chips very shortly after Conroe. Then they stayed with 4 cores all the way until Coffeelake in 2017. 11 years and 4 process generations with the same quad-core config! As early as Ivy Bridge they should have gone with 6 cores.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
No, you weren't. Ignoring how you claimed it would never exist as it does, you claimed, for example, that it wouldn't have lower-TDP CPUs at all. Pretty much any time you made a specific claim, it was wrong.

Stop lying. I said RKL would be less power efficient than its predecessor. I never said anything about not having lower-TDP parts. Of course they can have both high- and low-power SKUs. All they have to do is lower performance accordingly. This is standard binning strategy.

If you are going to make stuff up, at least try to fabricate something plausible.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Stop lying. I said RKL would be less power efficient than its predecessor. I never said anything about not having lower-TDP parts. Of course they can have both high- and low-power SKUs. All they have to do is lower performance accordingly. This is standard binning strategy.

If you are going to make stuff up, at least try to fabricate something plausible.

You claimed the exact opposite previously. At least it's now apparent that you're just here to troll, not discuss in good faith.
 

coercitiv

Diamond Member
Jan 24, 2014
6,201
11,903
136
As early as Ivy Bridge they should have gone with 6 cores.
It would have been fine even with Skylake. Focus on mobile experience with Haswell, focus on 14nm transition with Broadwell, increase core count with Skylake.

Nevertheless, you're right in absolute terms. Based on what I remember about testing with my Haswell 4c/8t, there was definitely room for 2 more cores when looking at max clocks around a 45W PL1. The 15-25W parts could have used 4c/8t as well.
 
  • Like
Reactions: Tlh97

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
Nevertheless, you're right in absolute terms. Based on what I remember about testing with my Haswell 4c/8t, there was definitely room for 2 more cores when looking at max clocks around a 45W PL1. The 15-25W parts could have used 4c/8t as well.
Yeah. 4c/8t at 3.1 GHz had a 45W TDP. The Haswell consumer platform could have had 8 cores at a 95W TDP 6 years ago. But I suspect the lack of apps taking advantage of more than 4 cores was a factor. That, and Intel didn't want to sacrifice ST too much. The extra L3 cache did help in certain games, but the design of motherboards would likely have been more expensive. In fact, I'm glad that Intel stuck to only 4 cores back then. The power efficiency of the whole platform was great because of that decision.

6 years later, the HEDT 8-core Haswell (5960X) isn't much faster than its 4-core sibling in Doom Eternal. But of course, extra cores would have helped to do more things at once. Today's Ryzen 5950X has it all, though: ST/MT speed and power efficiency. Well, minus a backup GPU.
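The core-count-vs-TDP trade being argued here can be sketched with the usual dynamic-power rule of thumb, P ∝ cores · V² · f: doubling cores while dropping clock (and therefore voltage) can stay inside the same power budget. All numbers below are illustrative assumptions, not measured Haswell data:

```python
def dynamic_power_w(cores, volts, ghz, k=3.4):
    # Rough CMOS dynamic power: P ~ cores * V^2 * f, with the switched
    # capacitance per core folded into the fitted constant k (assumed).
    return cores * k * volts ** 2 * ghz

# Illustrative operating points (assumed, not Intel specs)
quad = dynamic_power_w(4, 1.10, 3.5)   # 4 cores pushed for single-thread clocks
octo = dynamic_power_w(8, 0.95, 2.8)   # 8 cores at a lower voltage/frequency point

throughput_gain = (8 * 2.8) / (4 * 3.5)  # naive MT throughput ratio
print(f"4c: {quad:.0f} W, 8c: {octo:.0f} W, MT gain: {throughput_gain:.1f}x")
```

Under these assumptions both points fit a 95W envelope, while the 8-core configuration adds roughly 60% multithreaded throughput at a ~20% clock deficit, which is the poster's point about an 8-core Haswell having been feasible.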
 

  • Like
Reactions: Tlh97

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
It would have been fine even with Skylake. Focus on mobile experience with Haswell, focus on 14nm transition with Broadwell, increase core count with Skylake.

I am talking about desktops here. In Haswell it was the U series that had the advanced power management, so every chip above that could have used a core count boost.

Having extra cores does impact idle power. So I was thinking:

Haswell: 6 core Desktop
Skylake: 8 core Desktop, 4 core Laptop

Of course we know they had 8 core desktop in the works with Cannonlake.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Does Rocketlake fare better than Haswell and Skylake did? I don't think so.
Well, to some degree it does. IPC gains are up a decent amount, though in the end it equals out. How much of this is to make up for 10nm's clock deficiencies, how much of it is because of how near-perfect their 14nm process has become, and how much is them actually moving the needle forward, is hard to say. But it is finally a substantial increase in IPC.
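The "it equals out" wash can be put in rough numbers by treating per-thread performance as IPC × frequency. Both figures below are assumptions for illustration (Intel's claimed ~19% IPC uplift for the backported core, and a hypothetical ~6% boost-clock regression), not benchmark results:

```python
# Per-thread performance model: perf = IPC * clock.
ipc_uplift = 1.19        # ~19% claimed IPC gain vs. the Skylake core (assumed)
clock_ratio = 5.0 / 5.3  # hypothetical boost-clock regression on the reworked core

net_gain = ipc_uplift * clock_ratio
print(f"net per-thread gain: {net_gain:.3f}x")
```

With these assumed figures the clock regression eats about a third of the IPC gain, leaving a low-double-digit net improvement, consistent with the "decent IPC, modest overall" reading above.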

Besides, they NEEDED some of the gains. Thanks to Haswell we have laptops that last 8-10 hours now. Without that effort we would have been in the 5-7 hour range. How much worse would it have looked compared to ARM? Sure, ARM laptops do better, but the 8-10 hour range is acceptable enough that you start worrying about other things, such as performance and compatibility.
Now don't get me wrong, the gains were needed, and Ryzen has been an example of how you can still take advantage of power efficiency on a desktop chip. But they sacrificed a whole desktop generation to solidify using their notebook silicon for desktops.

Let's not forget that Nehalem started out on what became the HEDT platform. At some point Intel was thinking that they could have a desktop lineup and a mobile lineup. When they got Clarksdale and Lynnfield going, they decided they could make that silicon a desktop chip as well, and the seed was planted. They went HEDT for any desktop-dedicated chip as a base Xeon spin-off, and the general desktops would be mobile silicon. Haswell was the culmination of that, with Broadwell capitalizing on the 14nm move. It's been great for laptops, don't get me wrong, and needed.

But of all the companies that could afford a couple more dies to create a line of truly desktop dies for the general market, it would be Intel. To top it off, they did have those dies: the small Xeon die with its 6, 8, and later 10 cores would have been a perfect desktop offering. A few lasers here and there (maybe keeping it on dual channel, not offering the top core count) and they could have segmented it well enough. But no, the desktop chips became laptop chips, just to create an even bigger rift and keep the dies as small as possible. I am all for chasing margin, but Intel really milked their standing to create this sloth-moving beast that is now under true competition.

What about graphics? What if they had stayed at the shoddy GMA X3000 level of performance and support? That chip had programmable vertex/pixel shaders but hardware T&L performance that was below their own Netburst chips. The team refused to accept that the time had passed when you could offload graphics duties to the CPU just to sell a few more of them.
Wouldn't it be great if that was the reason? I mean, they needed to upgrade it, but Intel was busy drawing up Larrabee at the time. They knew they needed better cores. But that is almost beside the point. I think they saw an option to kill off AMD, who were basically only selling APUs at the time, so upping the graphics was important. But they also wanted to keep the die size as small as possible and keep the parts more segmented from their server and HEDT products, so they upped the graphics for that. Except the Iris chips, which required their own packaging.

Apple and ilk may have accelerated some of the transition but it was inevitable.

Actually, I think people would have been more accepting if they hadn't stayed at 4 cores for so long. Quad cores came with Kentsfield in late 2006. Actually, they announced quad-core chips very shortly after Conroe. Then they stayed with 4 cores all the way until Coffeelake in 2017. 11 years and 4 process generations with the same quad-core config! As early as Ivy Bridge they should have gone with 6 cores.
Again, the point isn't the needed moves. But all of Intel's changes since Sandy Bridge were to make it a better laptop chip. I don't blame Intel for trying to make money and improve profits. But realistically it would have cost them a fraction of a fraction of anything they would notice to keep a true desktop line that progressed faster, and it would have made them a much harder target to hit. That is the point. The real point of separation was there at Haswell. If Intel had taken the opportunity to have Haswell stay mobile and bring in Haswell-E or an offshoot as the desktop chip, they wouldn't have been an easy target. Instead Broadwell comes out, and they have to do more than a shrink, adding to the die just to fit the pin requirements, and they just up the graphics cores.
 
  • Like
Reactions: Tlh97

jpiniero

Lifer
Oct 1, 2010
14,599
5,218
136
But realistically it would have cost them a fraction of a fraction of anything they would notice to keep a true desktop line that progressed faster, and it would have made them a much harder target to hit. That is the point.

You're underestimating the impact 10 nm had. Intel was going to 8 cores with Cannonlake on desktop.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
That was, what, still 4 gens after they should have expanded core counts? (Can't remember, but they added Coffeelake-R before they cancelled Cannonlake, right?) It would have been 4 shrinks before they made the move?

They basically made the move at the same time either way. Just go by their "gens". 8 gens before a real measurable increase in consumer compute power.

So while we all know 10nm really shook things up and is actively making the situation worse for them, it's not the only reason they are struggling. Had they increased core counts when they made Broadwell, instead of expanding GPU cores, or created a desktop variation of the server cores, it would have barely cost Intel anything, and they wouldn't have been so easy for AMD to catch up with. But no, it had to be a laptop die the whole time.

That's my issue: people complain about Ryzen not having a GPU on the desktop, but on the other hand we get twice the core count if needed. Even if AMD wasn't doing chiplets on desktop, AMD can offer that much more easily because they aren't driven to use their mobile chip on desktop. Or look at Bulldozer: they recognized right away they couldn't compete on desktop without a new chip, so they kept APU development up but shuttered desktop.

Intel wanted to have their cake and eat it too. They wanted to keep the segmentation well defined, wanted to keep dies as absolutely small as possible for consumer products, and basically stopped developing for the desktop market completely, just letting their process advantage push their desktop chips well past their efficiency range. Any effort to push desktop performance died a long time ago, when they decided that they weren't making consumer desktop CPUs, that they were only making mobile or server CPUs. Mobile always prioritized power usage and temps above outright performance.
 
  • Like
Reactions: Tlh97

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Again, the point isn't the needed moves. But all of Intel's changes since Sandy Bridge were to make it a better laptop chip.

The problems back then weren't because they merely focused on laptops. Sandy Bridge was great BTW.

The problem started with Ivy Bridge when with 22nm they started to focus on Tablets and Phones. Remember the 37% gain they touted with 22nm? We didn't see anywhere near those gains in laptop and desktop chips. Not even their 17W U chip.

The 37% number came up because they said 22nm allowed Silvermont(Atom) to clock 37% higher.

That's why the 3770K was mediocre, clocking only 100MHz higher. Everything about that process was shifted toward not desktop, not laptop, but tablets, and eventually phones.

Haswell obviously continued this, since they realized even their laptop market would be threatened if laptops continued to have 4-5 hours of battery life. And the 4770K was mediocre.