News Intel 4Q25 Earnings


KompuKare

Golden Member
Jul 28, 2009
1,235
1,610
136
So yields were excellent before HVM but became crap at HVM? :rolleyes:
It would be insightful if someone had a table of Intel foundry claims and reality over - let's say - the past decade.

Actually since a lot of this was said at earnings calls and by people in certain roles...

...Well, unless Intel are simply too big to fail, shouldn't there be SEC sanctions and more?
 

511

Diamond Member
Jul 12, 2024
5,394
4,816
106
So yields were excellent before HVM but became crap at HVM? :rolleyes:
Tell me, have you seen any large projects? Every project has milestones or targets to hit along the way. Besides, as for "crap" yield: Intel said yield is on internal target but not where they want it to be. Yield is so complicated that one can't truly know without actual data; we are all doing guesswork.
 
  • Like
Reactions: inquiss

regen1

Senior member
Aug 28, 2025
351
430
96
So 18A yields are still crap?

13 months ago -

9 December 2024: "Gelsinger fires back at recent stories about 18A's poor yields, schools social media commenters on defect densities and yields"

Gelsinger was disputing the usual Reuters and some Taiwanese media reports implying broken 5% or 10% yields, I think? And he seems correct.

LBT said this at Q4 Earnings:
My team and I are working tirelessly to drive efficiency and more output from our fabs, and while yields are in-line with our internal plans, they are still below where I want them to be.
Doesn't necessarily seem yields are "crap" or some "toilet-tier" thing. May not be great but definitely doesn't seem that bad either.
They (18A) have some issues wrt parametric yield, and their defect-density (D0) curve should've reached closer to 0.1 somewhat sooner (it still hasn't, it seems?). Compared to recent TSMC nodes' D0 progression (see the N3/N5 curves), 18A does seem to reach lower values somewhat slower.
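For anyone wanting to put numbers on why that D0 gap matters: under the simple Poisson yield model (one of several; Murphy and negative-binomial variants give similar shapes), die yield is exp(-A * D0). A minimal sketch, assuming a hypothetical 120 mm² die (the die size is my illustration, not a disclosed 18A figure):

```python
import math

def poisson_yield(d0_per_cm2: float, die_area_mm2: float) -> float:
    """Simple Poisson yield model: Y = exp(-A * D0)."""
    area_cm2 = die_area_mm2 / 100.0  # 100 mm^2 = 1 cm^2
    return math.exp(-d0_per_cm2 * area_cm2)

# Hypothetical 120 mm^2 die; D0 values span "still maturing" to "healthy".
for d0 in (0.4, 0.2, 0.1):
    print(f"D0 = {d0:.1f}/cm^2 -> die yield ~ {poisson_yield(d0, 120):.0%}")
```

That prints roughly 62% / 79% / 89%, which is why getting the curve down to ~0.1 matters so much, and matters far more for big dies than small ones.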

Now, Samsung have had real issues with their initial GAA node family and couldn't mass-produce at the required levels for a long time.
 
Last edited:
  • Like
Reactions: lightmanek

Khato

Golden Member
Jul 15, 2001
1,365
454
136
Gelsinger's statements regarding yields were... creative.

LBT doesn't feel the need to be a cheerleader and hence doesn't mind admitting that yields aren't at 95%+ where he'd like them to be. That doesn't mean they're bad or not improving.
 

jpiniero

Lifer
Oct 1, 2010
17,146
7,533
136
For Client, it's less of an issue since the majority of sales are the Core (Ultra) 5 models. Goes down to 2+0+4.

It's probably not suitable for Foundry customers though, if Intel had any real ones.
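Harvesting is why it matters less for Client. As a toy model (core counts, core areas, and the D0 value below are all made up for illustration, not actual Intel configs or data), you can see how fusing off bad cores rescues dice that would otherwise be scrap:

```python
from math import comb, exp

def p_core_good(d0_per_cm2: float, core_area_mm2: float) -> float:
    """Chance a single core has no defect, Poisson model."""
    return exp(-d0_per_cm2 * core_area_mm2 / 100.0)

def p_at_least(k: int, n: int, p: float) -> float:
    """P(at least k of n independent cores are defect-free)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

d0 = 0.4                    # defects/cm^2, illustrative
pP = p_core_good(d0, 7.0)   # hypothetical P-core area
pE = p_core_good(d0, 1.5)   # hypothetical E-core area
full    = pP**4 * pE**8     # top SKU needs all 4 P + 8 E cores good
salvage = p_at_least(2, 4, pP) * p_at_least(4, 8, pE)  # cut-down bin or better
print(f"full config: {full:.0%} of dice, cut-down bin or better: {salvage:.0%}")
```

(This ignores defects in the uncore, which kill the die regardless; the point is just that cut-down SKUs soak up most of the fallout.)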
 

Joe NYC

Diamond Member
Jun 26, 2021
4,181
5,742
136
18A yield being dookie is an issue fixable in a year.
CPU IP is something they can maybe fix with UC.

What is saving Intel's bacon is AMD slacking off, not being aggressive with mobile SoC development. That's what extends Intel's lease on life until UC.

Superior / inferior CPU IP is only part of the equation in mobile CPU, where SoC and cadence of refreshes matters as much or more.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,581
731
126
Those universally (ok Hexagon is fine) suck at doing modern ML.
NPU is for MS teams background blur.
Why would they suck at modern ML? You can design them so they are basically just a downscaled B200 or whatever, supporting the same operations (but skipping NVLink etc., of course, which won't apply).

In the end it’s just how much silicon you allocate to it that matters.

If you only intend to use it for AI / ML stuff, an NPU will be more area-efficient than a regular GPU that e.g. has RT cores, which will be dark silicon for this type of use case.
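To be concrete about "the same operations": the bread and butter of both an NPU and a tensor-core GPU is quantized GEMM. A toy NumPy sketch of the structure (per-tensor int8 scales, int32 accumulate; illustrative only, not any vendor's actual datapath):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)).astype(np.float32)
B = rng.standard_normal((128, 32)).astype(np.float32)

# Per-tensor symmetric int8 quantization: x ~ scale * q
sA = np.abs(A).max() / 127.0
sB = np.abs(B).max() / 127.0
qA = np.round(A / sA).astype(np.int8)
qB = np.round(B / sB).astype(np.int8)

# Integer matmul accumulated in int32, then rescaled back to float --
# the same structure a matrix engine implements in hardware.
C = (qA.astype(np.int32) @ qB.astype(np.int32)).astype(np.float32) * (sA * sB)
print("max abs error vs fp32:", np.abs(C - A @ B).max())
```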
 

Win2012R2

Golden Member
Dec 5, 2024
1,322
1,361
96
Yield is so complicated that one can't know truly without having actual data we are all doing guesswork
Here is my Occam's razor guess: yields are crap, and clearly it's not "just" parametric here.

Any potential external customer will demand to see actual data, and will get it. They won't leak it since it will be under NDA, but them not taking it up speaks volumes. So who are Intel trying to deceive here? I can see only one target: the markets, i.e. shareholders. Don't know how that is legal, but under the current admin anything goes.

That doesn't mean they're bad or not improving.

It means they are bad, which in weasel words is "not where I'd like them to be". Are they improving? Most likely, but we don't know how fast; if it were fast, he'd say when they will get "to where I'd like them to be".
 
  • Like
Reactions: Tlh97 and KompuKare

adroc_thurston

Diamond Member
Jul 2, 2023
8,437
11,168
106
What is saving Intel's bacon is AMD slacking off, not being aggressive with mobile SoC development
Well duh, it's a dumb commodity market.
AMD has nicer things to worry about.
That's what extends Intel's lease on life until UC.
No it ain't.
where SoC and cadence of refreshes matters as much or more.
'cadence' is not a thing, you just need to deliver perf/BL/yaddayadda CAGR every now and then.
Why would they suck at modern ML?
Dumb VLIW machines with microscopic (and bad) SRAM piles are hardly fit for modern high performance GEMM.
You can design them so they are basically just a downscaled B200 or whatever
I understand that you've written exactly zero math kernels in your life but this is the part where you stop. Now stop.
In any case, gfx stuff tends to support allat and more. It's also very good at running GEMM.
If you only intend to use it for AI / ML stuff, an NPU will be more area efficient compared to using a regular GPU that e.g. has RT cores which will be dark silicon for this type of use cases.
RTFF barely eats your area (it's a one-off per SM).
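The SRAM point, concretely: a GEMM tile of shape M×N×K needs (MK + KN + MN) values on chip and does 2MNK flops per (MK + KN) values streamed in, so the scratchpad budget caps your arithmetic intensity. A rough sketch (tile sizes illustrative, int8 operands assumed):

```python
def tile_stats(m: int, n: int, k: int, bytes_per_elem: int = 1):
    sram = (m * k + k * n + m * n) * bytes_per_elem   # on-chip footprint
    traffic = (m * k + k * n) * bytes_per_elem        # operands streamed in
    flops = 2 * m * n * k                             # multiply-accumulates
    return sram, flops / traffic

for t in (32, 128, 512):
    sram, intensity = tile_stats(t, t, t)
    print(f"{t}^3 tile: {sram / 1024:>4.0f} KiB on-chip, {intensity:.0f} flops/byte")
```

Tiny SRAM forces tiny tiles, tiny tiles mean low flops/byte, and low flops/byte means the thing sits waiting on DRAM instead of doing math.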
 
  • Like
Reactions: Joe NYC

Doug S

Diamond Member
Feb 8, 2020
3,812
6,747
136
Servers are healthy and PCs are barely down.
Intel just got caught with their pants down wrt i7 capacity.

Wow you're incapable of reading what you quoted in your reply when it is only a single sentence. Maybe you need one of those cognitive tests I keep hearing about?

For a hint in case you can't figure it out, I specifically said 2026 is when PC/server demand is going to be crushed.
 
  • Like
Reactions: lightmanek

adroc_thurston

Diamond Member
Jul 2, 2023
8,437
11,168
106
Wow you're incapable of reading what you quoted in your reply when it is only a single sentence. Maybe you need one of those cognitive tests I keep hearing about?
Same to you buddy.
I specifically said 2026 is when PC/server demand is going to be crushed.
I meant 2026 units for PC and server.
Demand is fine, everyone's just gotta pay more.
Sucks but we'll manage.
 

Joe NYC

Diamond Member
Jun 26, 2021
4,181
5,742
136
For a hint in case you can't figure it out, I specifically said 2026 is when PC/server demand is going to be crushed.

PCs are widely expected to be down in 2026.

But server CPUs? That one is still an open question. We will have to wait and see.
 
  • Like
Reactions: Tlh97

Khato

Golden Member
Jul 15, 2001
1,365
454
136
It means they are bad, which in weasel words is "not where I'd like them to be". Are they improving? Most likely, but we don't know how fast, but if it was fast he'd say when they will get "to where I'd like them to be".
You might want to take a look at the last two decades of yield commentary from Intel before assuming that the lack of specifics or "not where I'd like them to be" means bad. E.g., a 75% yield still isn't where they'd like it to be, but it is by no means bad.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,581
731
126
Dumb VLIW machines with microscopic (and bad) SRAM piles are hardly fit for modern high performance GEMM.
I understand that you've written exactly zero math kernels in your life but this is the part where you stop. Now stop.
In any case, gfx stuff tends to support allat and more. It's also very good at running GEMM.
RTFF barely eats your area (it's a one-off per SM).
The whole point of an NPU is to tailor it for AI / LLM usage (and not add stuff that isn't needed for that, such as RT cores). Why else would they even exist, instead of just using a bigger iGPU?
 
Last edited:

mikegg

Platinum Member
Jan 30, 2010
2,110
653
136
We had matmul acceleration on local hardware before; Apple is way late in the game
Yep Nvidia did it in 2018, AMD in 2023 and Intel in 2022
*consumer hardware

Apple had matmul acceleration on their SoCs since 2017 with the NPU. They then added it to the CPU in 2019. They finally added it to the GPU in 2025.

Anyways, it's about processing power, large VRAM capacity, and high VRAM bandwidth at a relatively affordable price to consumers.

Apple had the latter two; it was missing the first, processing power. Matmul acceleration fixes this.

M5 Pro/Max/Ultra will be the best local inference machines for consumers or prosumers by far. AMD at least has something in Strix Halo. Nvidia will release their N1/N1X which should also be decent. Intel is completely missing here. Once again, completely missing the trend.
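The bandwidth point is easy to sanity-check: decoding one token has to touch every weight once, so tokens/s is capped at bandwidth divided by model size. A back-of-envelope sketch (bandwidth figures are round placeholders, not actual M5/Strix/N1X specs):

```python
def decode_ceiling_tok_s(bandwidth_gb_s: float, params_b: float,
                         bytes_per_param: float) -> float:
    """Bandwidth-bound decode ceiling: every weight is read once per token."""
    return bandwidth_gb_s / (params_b * bytes_per_param)

# 70B-parameter model quantized to ~4 bits (0.5 bytes/param) -> ~35 GB of weights
for name, bw in (("~270 GB/s system", 270),
                 ("~800 GB/s unified memory", 800),
                 ("~3.4 TB/s HBM GPU", 3400)):
    print(f"{name}: ceiling ~{decode_ceiling_tok_s(bw, 70, 0.5):.0f} tok/s")
```

Compute (matmul throughput) then determines how close you get to that ceiling, especially for prefill, which is exactly the piece Apple was missing.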
 

mikegg

Platinum Member
Jan 30, 2010
2,110
653
136
can't believe you said ram and affordable in the same sentence even relatively
RAM isn't going to be unaffordable forever.

Regardless, unified memory is extremely affordable compared to HBM GPUs.