Discussion Intel current and future Lakes & Rapids thread


Schmide

Diamond Member
Mar 7, 2002
5,581
712
126
Apple has a unique solution, as do many other ARM producers, but it's a very expensive monolithic chip with a high-tech socket and memory system. It works for them, and to some extent for AMD's APUs, but single-chip solutions only get you so far. Just look at the way chiplets revolutionized the market. There will always be room for efficient single-chip designs, but the cost of fabbing them is higher.

If you bought a system today or in the future, would you be OK with a single PCIe x4 link? Because that's basically what you're getting with USB4/TB. Yes, it's enough to do a lot of things, but once you try to do more than a few things at once, it will run into issues: for example, an extra 4K monitor plus capturing a high-quality video feed.
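To put rough numbers on that ceiling, here's a back-of-envelope sketch. All figures are approximate assumptions for illustration (USB4/TB tunnels PCIe at roughly the rate of a PCIe 3.0 x4 link; stream rates vary with bit depth and compression):

```python
# Back-of-envelope bandwidth budget for a USB4/Thunderbolt link.
# Assumed figures (approximate, illustrative only):
link_gbps = 32.0      # usable PCIe tunnel bandwidth, ~PCIe 3.0 x4
display_4k60 = 15.0   # extra 4K60 10-bit monitor stream, approx.
capture_feed = 12.0   # high-quality 4K capture feed, approx.

used = display_4k60 + capture_feed
headroom = link_gbps - used
print(f"used: {used} Gbps, headroom: {headroom} Gbps")
```

Two such streams already eat most of the link; add a fast external SSD or a second capture device and you're over budget.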

Why must ARM/RISC-V take over everything?
 
Last edited:
  • Like
Reactions: Tlh97 and lobz

Doug S

Platinum Member
Feb 8, 2020
2,202
3,405
136
Intel doesn't care about Apple. The world STILL does not run on Macs. Apple doesn't even have a server product.

Clearly they do, or the comments about "lifestyle company" wouldn't have been made.

Prior to the ARM switch, Apple accounted for about 6% of the worldwide PC market based on units sold. Given that Apple sells only mid- to high-priced PCs, with no low end where the majority of consumers shop, and thus only bought mid- to higher-end CPUs, they clearly accounted for well over 10% of the worldwide PC market in terms of revenue, and an even higher share of Intel's client CPU sales (because Apple bought only from Intel, not AMD). No company wants to take a double-digit revenue hit in its largest market segment.

It isn't a fatal blow to Intel, but it does hurt them in revenue and obviously in profit, since those mid- to high-end CPUs are where they make the biggest chunk of profit per unit. The bigger hit was reputational: for many years Intel marketed itself as offering the best CPUs around (the whole "Intel Inside" campaign), and now AMD is selling better x86 CPUs than Intel, and since leaving Intel, Apple is selling superior products (even if not beating Intel's best in raw performance, they blow them away in performance/watt, which is key for laptops, Apple's bread and butter).

Even though most consumers will never consider buying Apple due to price or because they think "PC" means "Windows", the publicity of Apple's switch might increase sales of AMD based PCs as customers no longer believe "Intel Inside" means they are automatically getting the best. So it isn't just the revenue loss from Apple no longer buying x86 CPUs.
 

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
Clearly they do, or the comments about "lifestyle company" wouldn't have been made.

Prior to the ARM switch, Apple accounted for about 6% of the worldwide PC market based on units sold. Given that Apple sells only mid- to high-priced PCs, with no low end where the majority of consumers shop, and thus only bought mid- to higher-end CPUs, they clearly accounted for well over 10% of the worldwide PC market in terms of revenue, and an even higher share of Intel's client CPU sales (because Apple bought only from Intel, not AMD). No company wants to take a double-digit revenue hit in its largest market segment.

It isn't a fatal blow to Intel, but it does hurt them in revenue and obviously in profit, since those mid- to high-end CPUs are where they make the biggest chunk of profit per unit. The bigger hit was reputational: for many years Intel marketed itself as offering the best CPUs around (the whole "Intel Inside" campaign), and now AMD is selling better x86 CPUs than Intel, and since leaving Intel, Apple is selling superior products (even if not beating Intel's best in raw performance, they blow them away in performance/watt, which is key for laptops, Apple's bread and butter).

Even though most consumers will never consider buying Apple due to price or because they think "PC" means "Windows", the publicity of Apple's switch might increase sales of AMD based PCs as customers no longer believe "Intel Inside" means they are automatically getting the best. So it isn't just the revenue loss from Apple no longer buying x86 CPUs.

Good points, but before ARM and its associated devices came along, Intel was doing very well. Another way to look at this is that ARM didn't so much take market away from Intel and AMD as create new markets for smartphones, tablets, and other devices. These markets simply didn't exist before. Intel and AMD may be encroaching into the new markets ARM created as much as ARM is creeping into historical AMD/Intel territory.

Now you could argue that many people own fewer x86 products since they have smartphones and tablets, and I would agree there is some validity to that.
 

dullard

Elite Member
May 21, 2001
24,998
3,326
126
Now you could argue that many people own fewer x86 products since they have smartphones and tablets, and I would agree there is some validity to that.
That is basically the point I am trying to make. Other than a jump during the pandemic as people worked or schooled from home, PC sales have been in quite a decline:
[attached chart: PC unit sales by year]
 

Doug S

Platinum Member
Feb 8, 2020
2,202
3,405
136
Good points, but before ARM and its associated devices came along, Intel was doing very well. Another way to look at this is that ARM didn't so much take market away from Intel and AMD as create new markets for smartphones, tablets, and other devices. These markets simply didn't exist before. Intel and AMD may be encroaching into the new markets ARM created as much as ARM is creeping into historical AMD/Intel territory.

Now you could argue that many people own fewer x86 products since they have smartphones and tablets, and I would agree there is some validity to that.

Intel is doing better in terms of revenue and profit than they were before smartphones hit the mass market, despite having reduced share of the PC market (mostly due to AMD's recent success) even after losing Apple's business.

They are only failing to do "very well" in terms of exploiting all the additional opportunities that came along. They were so busy defending their x86 monopoly that they totally misjudged how big a market smartphones would become, so they offered crappy solutions they couldn't get people to use even when those were essentially free. The smartphone-driven (and especially Apple-driven) revenue boost let TSMC pass Intel for process leadership when Intel stumbled, which in turn made it possible for AMD to pass Intel for performance leadership: AMD had relative process parity with Intel for the first time in their history, despite not getting access to TSMC's leading-edge processes until they were over a year old.
 
  • Like
Reactions: Tlh97 and Ajay

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Don't be too focused on the ISA as the sole contributor to Intel/AMD's decline in the past decade, when in reality the two companies have been falling flat on their faces every other CPU generation for the past decade as well. It's extremely frustrating to root for them with the amount of stumbling they do.

This is why things like Amdahl's Law and the square root law are valid, yet at the same time you can see two different CPUs having very different perf/watt and perf/mm² characteristics.

After Intel sold their StrongARM division, some insiders commented on what a mess it was and why they couldn't consistently make leading products. They failed because the problems within the company run deep.

Forget computers and look everywhere else. Some people are faster, better organized, and smarter than others, period. Has NOTHING to do with x86 vs ARM or process technology.

So I think they have to make a decision. Do they believe Core can get back on track? In that case, they should keep Atom focused on throughput efficiency, even if that means sacrificing performance.

That'll be perfectly fine. If you assume the output and quality of the two teams are equal, you'd still end up with Atom being more efficient and smaller in area, because of its throughput-oriented design goal and because of the square root law.
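The "square root law" here is usually stated as Pollack's rule: single-core performance grows roughly with the square root of core area. A toy sketch of why a throughput-oriented small core wins on perf/area (illustrative numbers, not real Intel data):

```python
import math

def pollack_perf(area):
    """Pollack's rule (toy model): per-core performance ~ sqrt(area)."""
    return math.sqrt(area)

big_area = 4.0    # one big core spending 4 area units
small_area = 1.0  # four small cores fit in the same area

big_throughput = pollack_perf(big_area)          # one big core
small_throughput = 4 * pollack_perf(small_area)  # four small cores

# Same silicon budget: the small-core cluster delivers twice the
# aggregate throughput, but each small core has only half the
# single-thread performance of the big core.
print(big_throughput, small_throughput)
```

Which is exactly the trade-off above: Atom wins on throughput per mm², while a big Core wins when few threads are running.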

But a reinvigorated Core team would excel at the other end, which is high performance at low thread counts.

Right now Atom is pretty good relative to the competition, but Core is seriously behind. I think the desktop config isn't showing the true power of the E cores; we'll get to see that on laptops. And Alder Lake is still in its infancy when it comes to implementing the idea.

Great. I will be happy to be dead wrong on this one. It will be interesting to see how wide and smart these cores can become.

Yes. And I wouldn't be surprised at 2x Golden Cove from Intel in a few years.* Of course AMD will give them stiff competition too. They'll be leapfrogging each other for a while.

*The hybrid config "allows" the bigger cores to be even bigger than they otherwise could be. I think that was a significant contributor to why ARM was able to gain at such a brisk pace in the past few years.
 
Last edited:

Doug S

Platinum Member
Feb 8, 2020
2,202
3,405
136
After Intel sold their StrongARM division, some insiders commented on what a mess it was and why they couldn't consistently make leading products. They failed because the problems within the company run deep.


As I understand it, the reason StrongARM failed at Intel is because there was a large and powerful contingent of people who believed in "x86 everywhere" and didn't want to help prop up a competing architecture by giving it access to Intel's latest process. They instead wanted to see x86 pushed downmarket to where ARM was at the time, which ultimately resulted in Atom.

I would guess the bean counters didn't like it either, because they couldn't charge the large premium for StrongARM chips that they could for x86 chips; StrongARM didn't have the duopoly x86 did (which was effectively a monopoly most of the time, when AMD's offerings seriously lagged Intel's). So they viewed every StrongARM wafer on the leading edge as an opportunity cost versus using that wafer for x86 CPUs.

I would imagine the StrongARM team was the B or C or even D team at Intel. The x86 teams got all the best engineers, the StrongARM team got all the worst and some newbies who if they proved themselves would be promoted to the x86 team. Not hard to believe they didn't produce very good designs in such a climate.
 
Last edited:

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
I would imagine the StrongARM team was the B or C or even D team at Intel. The x86 teams got all the best engineers, the StrongARM team got all the worst and some newbies who if they proved themselves would be promoted to the x86 team. Not hard to believe they didn't produce very good designs in such a climate.

How many people are on one of these teams? It boggles my mind that there aren't enough talented people to fill all the teams to an "A" level. Then again, high-level programming genius is a rare skill. There were exactly two people in my school who I think were at that level. Both went to MIT. One is an Nvidia engineer and the other a geologist! They were programming assembly and selling/pitching games to Broderbund back in the '80s. Still, if there is one at every high school in just the US, and there are almost 27,000 schools, etc... I'm just trying to do the math on these rare-as-hen's-teeth people.
 
Jul 27, 2020
15,759
9,822
106
It also depends somewhat on how much confidence the company leadership puts in these teams and the encouragement they give them. If the StrongARM team didn't get whatever they needed to perform at their best, it's not fair to blame them for their failures.
 

Schmide

Diamond Member
Mar 7, 2002
5,581
712
126
Intel produced ARM chips well past the StrongARMs of the late '90s with their XScale line, until they sold that division to Marvell in 2006, and much later still if you consider that EPIC Itanium is the weird child of HP's PA-RISC.
 

Doug S

Platinum Member
Feb 8, 2020
2,202
3,405
136
How many people are on one of these teams? It boggles my mind that there aren't enough talented people to fill all the teams to an "A" level. Then again, high-level programming genius is a rare skill. There were exactly two people in my school who I think were at that level. Both went to MIT. One is an Nvidia engineer and the other a geologist! They were programming assembly and selling/pitching games to Broderbund back in the '80s. Still, if there is one at every high school in just the US, and there are almost 27,000 schools, etc... I'm just trying to do the math on these rare-as-hen's-teeth people.


There are different levels of talent; it isn't a yes-or-no thing. Intel might only hire chip architects you'd consider to be "at that level" of one per high school, but it isn't as if they are all the same. Some of them are much better than others.

Intel's D team might still be filled with a lot of very smart people, just not as smart as those on the A team.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
I would imagine the StrongARM team was the B or C or even D team at Intel. The x86 teams got all the best engineers, the StrongARM team got all the worst and some newbies who if they proved themselves would be promoted to the x86 team. Not hard to believe they didn't produce very good designs in such a climate.

Nah, that is not true. The ARM team was in Intel Massachusetts, which was about as far away from the Intel mothership as you can get, and as a result did not get infected by nearly as much of the mothership toxicity and politics. There were a lot of smart people passing through that place, I knew a lot of them.

On the other hand, the Intel big-core x86 teams in the US were literally low-pass talent filters, where the most toxic and political characters were sent to the top of both management and engineering. There is a reason the US-based Intel CPU teams have not produced a technically competitive product in decades without a massive process advantage, and have needed to be bailed out by Intel Israel repeatedly.

The Atom team is in Intel Austin, which is somewhat removed from the mothership, which again explains their outsized capabilities relative to team size compared with other US-based teams. But it is all relative, since there are numerous companies in Austin that pay far more than Intel. Intel is a place to rest and vest, not to work and innovate.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
How many people are on one of these teams? It boggles my mind that there aren't enough talented people to fill all the teams to an "A" level. Then again high-level programming genus is a rare skill. There were exactly 2 people in my school who I think were at that level. Both went to MIT. One is an nVidia engineer and the other a geologist! They were programming Assembly and selling/pitching games to Broderbund back in the '80's. Still, if there is one at every high school in just the US and there are almost 27,000 schools, etc... just trying to do the math of these rare as hens teeth people.

It's not just a matter of talent, but also having a cohesive team that can tackle this kind of complex work together. I've seen plenty of talented engineers that just can't work with others at all which puts a lot of their talent to waste. If the corporation is dysfunctional in any number of different ways it can lead to a lot of smart people spending more time figuring out how to ultimately work against each other than they do working on anything productive.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,482
14,434
136
Nah, that is not true. The ARM team was in Intel Massachusetts, which was about as far away from the Intel mothership as you can get, and as a result did not get infected by nearly as much of the mothership toxicity and politics. There were a lot of smart people passing through that place, I knew a lot of them.

On the other hand, the Intel big-core x86 teams in the US were literally low-pass talent filters, where the most toxic and political characters were sent to the top of both management and engineering. There is a reason the US-based Intel CPU teams have not produced a technically competitive product in decades without a massive process advantage, and have needed to be bailed out by Intel Israel repeatedly.

The Atom team is in Intel Austin, which is somewhat removed from the mothership, which again explains their outsized capabilities relative to team size compared with other US-based teams. But it is all relative, since there are numerous companies in Austin that pay far more than Intel. Intel is a place to rest and vest, not to work and innovate.
I want to expand on what dmens said. This is not about Intel, BUT it IS about the kind of atmosphere that I think (almost) killed Intel. I worked for a company bigger than Intel, and retired from there. I was trying to improve things. One example: I was speaking to a senior vice president about a matter he agreed with me on, and his response was "it is beyond my pay grade to effect such a change." His boss did not agree, so even though he and everybody below him knew the action was correct, it did not happen.

THIS is the kind of toxicity that dmens is talking about, and I am sure it happened.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
I want to expand on what dmens said. This is not about Intel, BUT it IS about the kind of atmosphere that I think (almost) killed Intel. I worked for a company bigger than Intel, and retired from there. I was trying to improve things. One example: I was speaking to a senior vice president about a matter he agreed with me on, and his response was "it is beyond my pay grade to effect such a change." His boss did not agree, so even though he and everybody below him knew the action was correct, it did not happen.

THIS is the kind of toxicity that dmens is talking about, and I am sure it happened.

Hah, that is nothing compared to the mess Intel is in. For a long time Intel was in a position where it had no competition and a culture where all new ideas had to beat x86 xeon margins. This essentially killed all incentive to innovate and take risks. The Opteron threat was largely handled with contra-revenue and a massive process/manufacturing edge. The subsequent Bulldozer disaster ensured a decade of no competition, which meant that screwups were not held accountable since there were essentially no financial consequences. As a result management shifted to the most conservative strategy: self protection. They did that by stacking senior engineering with loyalists. Then came the 2016 layoffs, guess who got it the hardest?

So it amuses me when people say, AMD/NVIDIA/Apple/ARM can do this-and-that so Intel ought to be able to do it too, nah, it is not just a matter of the electrons spinning the same way regardless of company. Culture matters a lot. When the rumormongers/leak-freaks here are bedazzled by the glorious powerpoint roadmap future, my take is: even if you acknowledge those moonshot goals might be viable in that timeframe given the best team in the world executing at 100%, does Intel have that team? Heck no.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Hah, that is nothing compared to the mess Intel is in. For a long time Intel was in a position where it had no competition and a culture where all new ideas had to beat x86 xeon margins. This essentially killed all incentive to innovate and take risks. The Opteron threat was largely handled with contra-revenue and a massive process/manufacturing edge. The subsequent Bulldozer disaster ensured a decade of no competition, which meant that screwups were not held accountable since there were essentially no financial consequences. As a result management shifted to the most conservative strategy: self protection. They did that by stacking senior engineering with loyalists. Then came the 2016 layoffs, guess who got it the hardest?

So it amuses me when people say, AMD/NVIDIA/Apple/ARM can do this-and-that so Intel ought to be able to do it too, nah, it is not just a matter of the electrons spinning the same way regardless of company. Culture matters a lot. When the rumormongers/leak-freaks here are bedazzled by the glorious powerpoint roadmap future, my take is: even if you acknowledge those moonshot goals might be viable in that timeframe given the best team in the world executing at 100%, does Intel have that team? Heck no.
You've repeatedly claimed that Alder Lake, as it exists today, was outright impossible. So are we really supposed to take your opinion on that seriously?
 
  • Like
Reactions: Zucker2k and mikk

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
You've repeatedly claimed that Alder Lake, as it exists today, was outright impossible. So are we really supposed to take your opinion on that seriously?

If you had the slightest bit of honesty, you would remember that I stated Intel could take Gracemont perf higher than my estimates if they pumped crazy power into it, but I did not expect Intel to, because it would defeat the entire purpose of a low-power core meant to augment the big core. To quantify "crazy power": it would have to be significantly over what Tremont received (~2.5W per core maximally). Turns out I was wrong on the latter: as AnandTech noted, a single Gracemont core on Alder Lake can take over 10 watts to achieve a benchmark result, which is more than I guessed Intel would allow even under the most desperate circumstances. So I was wrong only about how desperate Intel was, not about the technical aspects.

On the other hand, you said Gracemont could scale to 4 GHz simply via process, specifically 10SF -> 10ESF, and still remain iso-power. Specifically, you drew a straight line from Ice Lake to Tiger Lake clocking on a single SKU and extrapolated linearly. And yet, in order to get Gracemont clocks to 3.9 GHz, it required a massive voltage boost, well over what Intel would maximally put into Tremont. So the performance was not due to process scaling, as you repeatedly claimed ad nauseam; it was voltage, at a huge power cost.

So you are totally and utterly wrong, as usual.
 
Last edited:

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
If you had the slightest bit of honesty, you would have remembered that I stated Intel can take Gracemont perf higher than my estimates if they pumped crazy power into it

No, you repeatedly insisted that 4GHz, even for an overclock, was outright impossible. See #10,486. Or in your own words:

I guarantee you Atom was not designed to 3.3ghz. It is just the boost clock Intel can hit by shoving unlimited power into the part

So, how can it hit 4+GHz if 3.3GHz is the absolute limit? You even outright claimed that a leak of Alder Lake's clocks, accurate to the MHz, was completely absurd. And I'll do you the favor of ignoring that embarrassing incident with your diploma...

On the other hand, you said Gracemont can scale to 4ghz simply via process, specifically 10 SF -> 10 ESF, and still remain iso-power

If you've noticed, those VF curves for Willow Cove are pretty much the exact same scaling we see for Gracemont. Do actually read an Alder Lake review when you get the chance. Might learn something.

But why am I even bothering. You claimed we were all idiots for thinking Alder Lake would even release this year, so surely there's nothing to even discuss, right?
 
  • Haha
  • Like
Reactions: Zucker2k and mikk

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
No, you repeatedly insisted that 4GHz, even for an overclock was outright impossible. See #10,486. Or in your own words:

So, how can it hit 4+GHz if 3.3GHz is the absolute limit? You even outright claimed that a leak of Alder Lake's clocks, accurate to the MHz, was completely absurd.

JFC... this is just utterly sad on your part.

"Designed to 3.3ghz" (what I said) and "run at 4ghz" (what you said) are two entirely different things. "Designed to 3.3ghz" means a specific voltage/frequency corner that the design is modeled to run at during the design process. It is called "static timing analysis". If you put significantly more voltage than that modeled point into the chip, it will run faster at higher power. DUH. Here is another engineering fact you don't know: absolutely no one models chips at the absolute max voltage. It is counterproductive; the chip will end up running hotter at lower voltages because the design changes forced at max voltage are pushed well past the curve for minimal gains.
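The design-corner vs. overvolted-boost distinction can be illustrated with the standard CMOS relations (a toy model with made-up V/f numbers, not Intel data): above threshold, frequency rises roughly linearly with voltage, while dynamic power goes as C·V²·f, so power grows roughly with the cube of frequency along that curve.

```python
def dynamic_power(v, f, c=1.0):
    """Dynamic CMOS power: P = C * V^2 * f (normalized units)."""
    return c * v * v * f

# Toy V/f points, purely illustrative:
v_design, f_design = 0.80, 3.3   # volts, GHz: the "designed to" corner
v_boost,  f_boost  = 1.15, 3.9   # pushed well past the modeled point

p_design = dynamic_power(v_design, f_design)
p_boost  = dynamic_power(v_boost, f_boost)

# An ~18% clock bump bought with ~44% more voltage costs ~2.4x the power.
print(f"power ratio: {p_boost / p_design:.2f}x")
```

Which is why "can be made to run at 4 GHz with enough voltage" and "designed to 3.3 GHz" are not contradictory statements.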

Again as usual, you have absolutely no idea what you are talking about.

And I'll do you the favor of ignoring that embarrassing incident with your diploma...

Why, people pay 6 figures these days for that kind of paper just to land an interview at the place I work now LOL. Maybe if you had that piece of paper, you just might know what "static timing analysis" is.

If you've noticed, those VF curves for Willow Cove are pretty much the exact same scaling we see for Gracemont. Do actually read an Alder Lake review when you get the chance. Might learn something.

But why am I even bothering. You claimed we were all idiots for thinking Alder Lake would even release this year, so surely there's nothing to even discuss, right?

Seriously? Those VF curves from Intel don't even have *labels*. You are going to claim identical scaling on four different designs running at four different voltages and frequencies now, from unlabeled graphs?

By the way, you just quoted yourself citing process frequency scaling as the justification for a 3.9ghz Gracemont. So, own goal: wrong again.
 

Doug S

Platinum Member
Feb 8, 2020
2,202
3,405
136
The atom team is in Intel Austin which is somewhat removed from the mothership which again explains their outsized capabilities compared to the team size relative to other US based teams. But it is all relative since there are numerous companies in Austin that pay far more than Intel. Intel is a place to rest and vest, not to work and innovate.

What "outsize capabilities"? When has there been an Atom that didn't suck? Did I miss when real products people wanted actually shipped with that POS?
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
What "outsize capabilities"? When has there been an Atom that didn't suck? Did I miss when real products people wanted actually shipped with that POS?

The Austin atom team is smaller and less funded. It is also on the receiving end of quite a bit of abuse from the "prestige" teams (I use that term sarcastically).
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
JFC... this is just utterly sad on your part.

"Designed to 3.3ghz" (what I said) and "run at 4ghz" (what you said) are two entirely different things. "Designed to 3.3ghz" means a specific voltage/frequency corner that the design is modeled to run at during the design process. It is called "static timing analysis". If you put significantly more voltage than that modeled point into the chip, it will run faster at higher power. DUH. Here is another engineering fact you don't know: absolutely no one models chips at the absolute max voltage. It is counterproductive; the chip will end up running hotter at lower voltages because the design changes forced at max voltage are pushed well past the curve for minimal gains.

Since you don't appear to remember your own words:

I guarantee you Atom was not designed to 3.3ghz. It is just the boost clock Intel can hit by shoving unlimited power into the part.
So, again, if 3.3GHz is the max, as you claimed explicitly, then how is 4+GHz possible?

And, again, why did you claim that the accurate leak of the 12900k's clocks was laughable?

Seriously? Those VF curves from Intel don't even have *labels*. You are going to claim identical scaling on four different designs running at four different voltages and frequencies now, from unlabeled graphs.

By the way, you just quoted yourself saying process frequency scaling as the justification of a 3.9ghz Gracemont. So, own goal, wrong again.
Uh, they did have specific points labeled, and since my extrapolation was correct while yours was almost a GHz off, it's hilarious that you're still doubling down. But at least you're now indirectly acknowledging that 3.9+GHz is possible, so that's some progress.

And oh yeah, how are we discussing this at all when you claimed Alder Lake shouldn't even be out?
 
  • Like
Reactions: mikk

Abwx

Lifer
Apr 2, 2011
10,847
3,297
136
If you've noticed, those VF curves for Willow Cove are pretty much the exact same scaling we see for Gracemont.

Assuming that the E and P cores have the same frequency at a given voltage, then past 3.7 GHz on all cores the E cores are uselessly overvolted, since they share a supply rail with the P cores.

This completely annihilates the efficiency gain that could be had if all cores were running at 3.7 GHz.

I guess that when the ADL design was started, Intel had no clue AMD would release a 16C/32T CPU; otherwise they wouldn't have released such a half-baked design.
 
  • Like
Reactions: Tlh97

Abwx

Lifer
Apr 2, 2011
10,847
3,297
136
They don't, though. Certainly there are some inefficiencies from a shared rail, but it's not nearly that drastic.

More than drastic, actually.

At 3.7 GHz, voltage/frequency is still on a square-law curve power-wise, but the higher the frequency, the more it morphs into a cubic or even quartic curve, which is the case for ADL since the last drop of frequency is squeezed out of the P cores; so the overvolting of the E cores is huge when the chip runs at full tilt.

Edit :

This could have been solved with a dual rail, but for cost and technical reasons it would have been a total mess and nightmare.

Technically, this would deprive the P cores of a number of supply pins that would be dedicated to the E cores, making the P cores' supply currents more prone to internal voltage drops due to the higher electrical resistance.
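The shared-rail penalty can be quantified with the same P = C·V²·f relation (illustrative assumed voltages, not measured ADL data): if the E cores only need, say, 0.85 V for 3.7 GHz but sit on the P-core rail at 1.25 V, the V² term alone throws away a large chunk of their efficiency.

```python
def dynamic_power(v, f, c=1.0):
    """Dynamic CMOS power: P = C * V^2 * f (normalized units)."""
    return c * v * v * f

f_e = 3.7          # GHz, E-core clock (same in both cases)
v_needed = 0.85    # V the E cores would need on their own rail (assumed)
v_shared = 1.25    # V forced on them by the P cores' shared rail (assumed)

p_own = dynamic_power(v_needed, f_e)
p_shared = dynamic_power(v_shared, f_e)

# Same clock, ~2.2x the power, purely from the V^2 overvolting penalty.
print(f"overvolt penalty: {p_shared / p_own:.2f}x")
```

With these assumed numbers the E cores burn more than double the power for zero extra performance whenever the P cores drag the rail up to their boost voltage.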
 
Last edited: