Discussion Intel's past, present and future

Page 42 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

mikegg

Golden Member
Jan 30, 2010
1,925
532
136
You mean 14A? The best way would be to double-pattern EUV instead of using High-NA, and reuse Arizona for their own needs. When someone asks for capacity, just ask them to pay in advance and bring Ohio online; that way they can skip building more fabs and save quite a lot. For foundry, they have already spent a fortune.
The learning curve is expensive; you need volume and money.
How much money will they save?
 

DavidC1

Golden Member
Dec 29, 2023
1,714
2,780
96
How much of that is cache? How much of that is the core itself? I honestly don't put much stock in that figure.
And having a core that wide, with presumably large core-private caches, could mean you can cut out an entire layer of cache, like what Apple and Qualcomm do.
We're not taking L2 caches into account when comparing core sizes. L2 caches are not core-private caches. SRAM is easy to add. Lion Cove is objectively worse than every other core out there.
Why not Vmin?
Because... that's fundamental to operation. Vmin is set by the transistor threshold voltage, which is where it needs to be for the transistor to turn on. We've been in the ~0.5-1.3V range since 2001. We went from 5V to 3.3V to 2.5V to 1.8V, then 1.5V, and finally to where we are now.

Try lowering the voltage to 0.3V on your CPU and tell me how it goes. No, let's be realistic and set it at 0.55V. Even if it runs at all, your frequency will crash. Like 200MHz crash. Yes, it'll run at ultra-low power, but it's unusable. Even an M4 is unusable at 200MHz.

The room for voltage scaling is getting smaller and smaller because they've been working on it since 2001. All the levers are starting to be exhausted. There's no room for a massive jump to a 24-wide, 13mm2 CPU from what we have today. And unless most of the core logic goes unused, it will suck power like a madman too. It has to be steady and balanced so your execution doesn't fail.
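A rough way to see why the ceiling is so hard: gate delay depends on the overdrive (V minus Vth), so as the supply approaches threshold, achievable frequency collapses much faster than power drops. A toy sketch using the classic alpha-power delay model; all constants here are assumed round numbers for illustration, not measurements of any real chip:

```python
# Illustrative sketch: why near-threshold operation tanks frequency.
# Alpha-power delay model: f ~ (V - Vth)^a / V; dynamic power: P ~ V^2 * f.
VTH = 0.45   # assumed threshold voltage, volts (placeholder)
ALPHA = 1.3  # assumed velocity-saturation exponent (placeholder)
VNOM = 1.1   # assumed nominal supply, volts

def rel_freq(v, vth=VTH, a=ALPHA, vnom=VNOM):
    """Achievable frequency relative to nominal supply."""
    drive = lambda x: (x - vth) ** a / x
    return drive(v) / drive(vnom)

def rel_power(v, vnom=VNOM):
    """Dynamic power relative to nominal: P ~ V^2 * f."""
    return (v / vnom) ** 2 * rel_freq(v)

for v in (1.1, 0.8, 0.55):
    print(f"{v:.2f} V: freq x{rel_freq(v):.2f}, power x{rel_power(v):.3f}")
```

With these placeholder constants, dropping from 1.1V to 0.55V leaves under a fifth of the frequency; that is the "200MHz crash" in miniature, even though the power savings look spectacular on paper.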
 
  • Like
Reactions: Joe NYC

Geddagod

Golden Member
Dec 28, 2021
1,462
1,565
106
We're not taking L2 caches into account when comparing core sizes. L2 caches are not core-private caches. SRAM is easy to add. Lion Cove is objectively worse than every other core out there.
Sorry, I shouldn't have used Apple and Qualcomm as examples there, since they use large shared caches to eliminate the need for an L3...
But L2 caches are usually counted as part of the core. Any core-private cache usually is. Hence the figure of "13mm2" without any context isn't very useful.
Because... that's fundamental to operation. Vmin is set by the transistor threshold voltage, which is where it needs to be for the transistor to turn on. We've been in the ~0.5-1.3V range since 2001. We went from 5V to 3.3V to 2.5V to 1.8V, then 1.5V, and finally to where we are now.

Try lowering the voltage to 0.3V on your CPU and tell me how it goes. No, let's be realistic and set it at 0.55V. Even if it runs at all, your frequency will crash. Like 200MHz crash. Yes, it'll run at ultra-low power, but it's unusable. Even an M4 is unusable at 200MHz.

The room for voltage scaling is getting smaller and smaller because they've been working on it since 2001. All the levers are starting to be exhausted. There's no room for a massive jump to a 24-wide, 13mm2 CPU from what we have today. And unless most of the core logic goes unused, it will suck power like a madman too. It has to be steady and balanced so your execution doesn't fail.
Except that if you have 2 or 3x the IPC of GLC, you can run at ridiculously low clocks and still get very good performance.
And 1T perf seems to be the entire point of RYC; even if you go to the Ahead Computing website, you'll see them prioritize ST perf and also perf/watt.
And current cores, even in servers, also run much, much higher than their Vmin. I'm guessing Royal Core would run its cores much closer to Vmin and take advantage of shifting the V/F curve up to get much better perf/watt.
I think the people in charge of RYC believed that just a different physical design (like AMD's), or even an outright different E-core arch (like Apple and Qualcomm use), could significantly lower Vmin for the markets that need core density, or that the cores could be used as an LP island for mobile.
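The clocks-vs-IPC tradeoff is easy to put rough numbers on: 1T performance is roughly IPC times frequency, while dynamic power scales with V²·f, so a much wider core near the bottom of the V/F curve can match a narrow core's single-thread performance at a fraction of the power. Every figure below is made up for illustration, and the model deliberately ignores the wide core's extra switched capacitance and leakage, which is exactly the counterargument above:

```python
# Toy comparison: a hypothetical very-wide core at 2.5x IPC and low V/F
# versus a narrow core pushed far up the curve. All numbers invented.

def perf(ipc, ghz):
    return ipc * ghz  # relative 1T performance

def power(volts, ghz):
    # Relative dynamic power, P ~ V^2 * f, assuming equal switched
    # capacitance (unrealistic for a 24-wide core, but keeps it simple).
    return volts ** 2 * ghz

# (IPC, GHz, volts): illustrative placeholders, not measured values
glc  = (1.0, 5.5, 1.30)  # narrow core at high clocks and high voltage
wide = (2.5, 2.2, 0.75)  # wide core near the bottom of the V/F curve

p_glc, p_wide = perf(*glc[:2]), perf(*wide[:2])
w_glc, w_wide = power(glc[2], glc[1]), power(wide[2], wide[1])
print(f"perf ratio: {p_wide / p_glc:.2f}x, power ratio: {w_wide / w_glc:.2f}x")
```

Under these assumptions the wide core matches the narrow one's performance at well under a fifth of the dynamic power; whether a real 24-wide design could actually hold capacitance and leakage anywhere near constant is the open question.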
 
  • Like
Reactions: Io Magnesso

Joe NYC

Diamond Member
Jun 26, 2021
3,387
4,970
136
Rather than claiming "Grok told me this" without context and expecting us to take it at face value, if you had simply said "Grok pointed me to this article claiming CoWoS is around 7x more expensive," then people would trust that far more.

The 7x was a projection or a design goal. Additional searches tried to find the actual figure, and that's how Grok came up with the 3x to 7x range.

Because you may think you have "experience" in how LLMs work, but if you believe ridiculous claims that LLMs have reasoning abilities, you clearly don't have any idea how they actually work. AI could never answer a question like that without finding a link to a human-provided answer. Maybe someday we'll have one that can, but I highly doubt LLMs will be part of the mechanism that gets us there. It is better than what came before, but it is not able to reason.

I think you are way underestimating the abilities of LLMs. Copilot inadvertently came up with a different approach to finding the cost ratio between CoWoS and InFO: come at it from the other side, from the cost of the wafer (though it mixed up the units).

But then I prompted Grok to try this approach (starting from the cost of the wafer), and, using the same context it already had, it meshed its first reasoning approach together with the second one. It came up with a cost for CoWoS in the same range as Intel's Foveros and various other wafer cost estimates.

In either case, Intel took on a whole bunch of extra overhead with its implementation of advanced packaging, and this albatross is going to hang over Intel's ability to maintain market share on the low end of the client segment.

Which brings us back to the subject at hand: Gelsinger and his legacy. This information on the high cost of Intel's packaging is another piece of evidence of how delusional Gelsinger was in ignoring the costs. He was probably thinking Intel would be so far ahead in performance and technology that it could charge whatever price it wanted, and customers would pay.
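For what it's worth, ratios like "3x to 7x" typically come from dividing a per-wafer packaging cost over the good packages that wafer yields. A back-of-envelope sketch of that arithmetic, where every input number is a placeholder I made up for illustration rather than an actual TSMC or Intel quote:

```python
# Back-of-envelope: per-package cost from per-wafer packaging cost.
# All inputs are invented placeholders, not real quotes.

def cost_per_package(wafer_cost_usd, packages_per_wafer, yield_frac):
    """Packaging cost attributed to each good (yielding) package."""
    return wafer_cost_usd / (packages_per_wafer * yield_frac)

# Hypothetical basic fan-out flow vs. hypothetical 2.5D interposer flow
info  = cost_per_package(wafer_cost_usd=600,  packages_per_wafer=250, yield_frac=0.95)
cowos = cost_per_package(wafer_cost_usd=1700, packages_per_wafer=150, yield_frac=0.90)
print(f"InFO ~${info:.2f}/pkg, CoWoS ~${cowos:.2f}/pkg, ratio {cowos / info:.1f}x")
```

The point is only that the ratio is very sensitive to packages-per-wafer and yield, which is why estimates from different sources can legitimately span a range as wide as 3x to 7x.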
 
  • Like
Reactions: igor_kavinski

Joe NYC

Diamond Member
Jun 26, 2021
3,387
4,970
136
@Joe NYC In addition to that they hallucinate. That's the real problem.

It's not a "problem," it's a performance parameter that all the models are addressing. Hallucination, stated differently, is the model coming up with a wrong answer with high confidence. So they are addressing both ends: getting fewer answers wrong, and also assessing (and informing the user of) the level of confidence.

Also, personally, I ignore the AI search result at the top because it's been shown that reliance on it decreases our capacity.

I take it as the old "I'm Feeling Lucky" button on Google. There's a possibility it's what I'm looking for, so I glance at it. And you don't have to click on anything that takes you away from the search results. But overall, I find Google to be quite weak in what it presents as the result of its own LLM (in my experience).
 
  • Haha
Reactions: MuddySeal

Doug S

Diamond Member
Feb 8, 2020
3,378
5,953
136
Because... that's fundamental to operation. Vmin is set by the transistor threshold voltage, which is where it needs to be for the transistor to turn on. We've been in the ~0.5-1.3V range since 2001. We went from 5V to 3.3V to 2.5V to 1.8V, then 1.5V, and finally to where we are now.

Try lowering the voltage to 0.3V on your CPU and tell me how it goes. No, let's be realistic and set it at 0.55V. Even if it runs at all, your frequency will crash. Like 200MHz crash. Yes, it'll run at ultra-low power, but it's unusable. Even an M4 is unusable at 200MHz.

The room for voltage scaling is getting smaller and smaller because they've been working on it since 2001. All the levers are starting to be exhausted. There's no room for a massive jump to a 24-wide, 13mm2 CPU from what we have today. And unless most of the core logic goes unused, it will suck power like a madman too. It has to be steady and balanced so your execution doesn't fail.

Do you happen to have any idea what happened with Intel's research into near-threshold voltage operation? IIRC they were talking about this in the early 2010s, targeting voltages as low as 0.3V.

Something's gonna have to give with AI; they aren't going to be able to source sufficient power to build everything they talk about, nor support the cost of operating it without profits to pay for it at some point. If Intel had been able to get that to work, it sure would be an advantage for their foundry right about now...
 
  • Like
Reactions: Io Magnesso

511

Diamond Member
Jul 12, 2024
3,405
3,292
106
Do you happen to have any idea what happened with Intel's research into near-threshold voltage operation? IIRC they were talking about this in the early 2010s, targeting voltages as low as 0.3V.

Something's gonna have to give with AI; they aren't going to be able to source sufficient power to build everything they talk about, nor support the cost of operating it without profits to pay for it at some point. If Intel had been able to get that to work, it sure would be an advantage for their foundry right about now...
Intel has done a lot of research, but sometimes they can (as in cancel, not Coca-Cola can) the useful stuff and pursue stuff that doesn't even make sense.
 

DavidC1

Golden Member
Dec 29, 2023
1,714
2,780
96
Do you happen to have any idea what happened with Intel's research into near threshold voltage operation? IIRC they were talking about this in the early 2010s, targeting voltages as low as 0.3v.
They just stopped talking about it.

That's why germanium transistors were all the rage for a while: germanium has a Vth of 0.3V. But it's still not here. Also, it's one thing for a single transistor; it's another thing for a whole CPU. And what frequency will they run at? Low Vth and germanium are good, but not so much at 2GHz.

Advancements were certainly made for CPUs to reach 6GHz. Years ago they were talking about more resilient circuitry techniques for higher-frequency operation, and they might be using those now. Those were the good old days of IDF.
Which brings us back to the subject at hand: Gelsinger and his legacy. This information on the high cost of Intel's packaging is another piece of evidence of how delusional Gelsinger was in ignoring the costs. He was probably thinking Intel would be so far ahead in performance and technology that it could charge whatever price it wanted, and customers would pay.
Gelsinger was likely betting on more than that, such as the lockdown premiums continuing. When that stopped, demand plummeted.

There's a company that offers an e-bike rental service, and they haven't been making much money either. The ones that are randomly placed in spots, with only a phone needed to unlock, pay, and ride. I think they are not far from bankruptcy. They appeared around the same time period, so a few companies also bet on that continuing. They were also way too expensive. But convenience beats not having debt for some people, I guess.
Sorry, I shouldn't have used Apple and Qualcomm as examples there, since they use large shared caches to eliminate the need for an L3...
But L2 caches are usually counted as part of the core. Any core-private cache usually is. Hence the figure of "13mm2" without any context isn't very useful.
L2s aren't really private. They're easy to add, which is why we don't include them in the figures. Everybody can do them. But can they actually make a good CPU? The smarts are in the uarch, not the higher-level caches.

Also, unless the "private" cache is in the range of 10MB, the vast majority of that 13mm2 is the core, and I doubt it was included.

AMD also beats Lion Cove substantially. Remember Zen 5 is on the older process.
 
Last edited:
  • Like
Reactions: Joe NYC

itsmydamnation

Diamond Member
Feb 6, 2011
3,055
3,862
136
Which is fine. When Wall Street finally gets bored of AI, all the companies will slash spending.
I feel like the AI bubble could be worse than dot-com in terms of impact on Wall Street.

Tech-bro CEOs can't keep telling everyone they're going to take their jobs while at the same time asking for more VC funding. At some point the rubber is going to have to meet the road.
 

NTMBK

Lifer
Nov 14, 2011
10,424
5,740
136
https://www.phoronix.com/news/Intel-Ends-Clear-Linux
How much does Intel pay these engineers? The CEO could easily take a lower bonus or salary and keep these guys, but instead he cuts them.
Like I said, profit before open source for massive corps.

If Intel had zero benefit from supporting open source, they wouldn't touch it.
They can support open source just fine without running support and maintenance for an entire distro.
 

511

Diamond Member
Jul 12, 2024
3,405
3,292
106

mikegg

Golden Member
Jan 30, 2010
1,925
532
136
OK, thanks. Given that a TSMC 2nm fab costs $28 billion (estimated), and Intel has $20 billion on hand right now, how do you think Intel can build a 14A fab?

Nailed it:

"If we are unable to secure a significant external customer and meet important customer milestones for Intel 14A, we face the prospect that it will not be economical to develop and manufacture Intel 14A and successor leading-edge nodes on a go-forward basis," the company wrote. "In such event, we may pause or discontinue our pursuit of Intel 14A and successor nodes."
 

511

Diamond Member
Jul 12, 2024
3,405
3,292
106
But people told me 18A was getting canned, not 14A. 18A is a long node; they will use it throughout 2030 lol
 

Joe NYC

Diamond Member
Jun 26, 2021
3,387
4,970
136
But people told me 18A was getting canned, not 14A. 18A is a long node; they will use it throughout 2030 lol

I have not heard anyone say that. The rumor was that 18A would not be actively promoted to outside customers. But LBT did not even say that. It seems that Intel is open to selling it to second-tier customers.