News Intel 4Q25 Earnings


Joe NYC

Diamond Member
Jun 26, 2021
4,181
5,746
136
So somehow they have no foresight of any sort. I don't know if this is because a) they are all stupid, b) it's too chaotic, or c) it's even malice/intentional.

The decision making is probably just too slow. By the time anything is finally agreed on, it's obsolete.
 
  • Like
Reactions: maddie

mikegg

Platinum Member
Jan 30, 2010
2,110
653
136
You know, they say hindsight is 20/20, but now I call them Intel, the "bad timing" corporation.

They make a decent chip with Panther Lake, at a time when memory and SSD prices will likely limit its impact. And they got their dGPU out months after the crypto boom stopped. And if the Xe4 rumors are accurate, then perhaps after the AI market crashes they will be significantly behind the competition, along the lines of B390 vs. 890M.

Had they continued their Optane lineup, they could have benefited from the AI push in their datacenter lineup. Heck, they could have gotten enthusiast Optane DIMMs out for PCs and offered a slow memory + fast memory tier. DIMMs with 100-300 ns latency are actually fast enough to serve as RAM for the vast majority of use cases.
Here’s another one they will miss:

Right now, local LLMs are a relatively small market. But at some point, local models will be good enough and interest in this market will skyrocket.

Guess what? No unified memory consumer products with a big GPU from Intel.

Apple and AMD will benefit the most. AMD with Strix Halo. Apple with Pro and Max series Macs.

And by the time Intel comes out with something, the market boom will probably be over.

They have truly missed every single trend in the last 10 years. Every single one. Call it unlucky or poor management or both.
 
  • Like
Reactions: Tlh97 and Joe NYC

mikegg

Platinum Member
Jan 30, 2010
2,110
653
136
Uh, no. Scaling laws still hold true.
Anything remotely capable will run on a big chungus rack stashed in some data center.
Uh, yes. Local models will never be as good as cloud models. That won't change. What will change is that local models will get good enough to be very useful, and local hardware will get good enough to run them.

Perfect example is the M5 series adding matmul acceleration to its GPUs. Now suddenly prompt processing is 4x faster than on the M4 series. Run something like GLM 4.7 flash locally and you can do something very useful and private.
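For anyone wondering why a GPU matmul unit speeds up prompt processing specifically: prefill pushes the entire prompt through each weight matrix in one large matrix multiply, while decode does one skinny matmul per generated token. A minimal numpy sketch (toy sizes, nothing Apple- or GLM-specific — the dimensions here are illustrative):

```python
import numpy as np

d_model = 64          # toy hidden size
prompt_len = 16       # tokens in the prompt
rng = np.random.default_rng(0)
W = rng.standard_normal((d_model, d_model))   # one toy weight matrix

# Prefill: all prompt tokens go through W in a single (16, 64) @ (64, 64)
# matmul -- compute-bound, so a faster matmul unit directly speeds it up.
prompt = rng.standard_normal((prompt_len, d_model))
prefill_out = prompt @ W                      # shape (16, 64)

# Decode: one token per step -> a skinny (1, 64) @ (64, 64) matmul,
# which tends to be memory-bandwidth-bound instead.
token = rng.standard_normal((1, d_model))
decode_out = token @ W                        # shape (1, 64)

assert prefill_out.shape == (prompt_len, d_model)
assert decode_out.shape == (1, d_model)
```

That split is why matmul acceleration shows up as a big jump in prompt-processing speed while token-generation speed moves much less.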

Don’t underestimate people wanting local waifus and generating porn offline.
 

511

Diamond Member
Jul 12, 2024
5,395
4,816
106
Don’t underestimate people wanting local waifus and generating porn offline.
Yeah, this can't be underestimated.
Perfect example is the M5 series adding matmul acceleration to its GPUs. Now suddenly prompt processing is 4x faster than on the M4 series. Run something like GLM 4.7 flash locally and you can do something very useful and private.
We had matmul acceleration on local hardware before; Apple is way late to the game.
 

adroc_thurston

Diamond Member
Jul 2, 2023
8,444
11,175
106
What will change is that local models will get good enough to be very useful, and local hardware will get good enough to run them.
They won't. Scaling laws still hold true.
Overfitting a 7B model to do well in benchmarks won't make it more capable.
Don’t underestimate people wanting local waifus and generating porn offline.
The market for stonecold truecel jackoff sessions is smaller than you think, and they all roll gacha anyway.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,581
731
126
Right now, local LLMs are a relatively small market. But at some point, local models will be good enough and interest in this market will skyrocket.

Guess what? No unified memory consumer products with a big GPU from Intel.
If anything, the NPU is what will be used for this on regular PCs.
 

511

Diamond Member
Jul 12, 2024
5,395
4,816
106
So 18A yields are still crap?

13 months ago -

"Gelsinger fires back at recent stories about 18A's poor yields, schools social media commenters on defect densities and yields"
Uhh, this was before the node was in HVM... As for the yields, they are not where LBT wants them to be.