
News Intel 4Q25 Earnings

So somehow they have no foresight of any sort. I don't know if it's a) they're all stupid, b) it's too chaotic, or c) even malice/intentional.

The decision making is probably just too slow. By the time anything is finally agreed on, it's already obsolete.
 
You know, they say hindsight is 20/20, but now I call them Intel "bad timing" Corporation.

They make a decent chip with Panther Lake, at a time when memory and SSD prices will likely limit its impact. And they got their dGPU out months after the crypto boom stopped. And if the Xe4 rumors are accurate, then perhaps after the AI market crashes they'll be significantly behind the competition, along the lines of B390 vs 890M.

Had they continued their Optane lineup, they could have benefited from the AI push for their datacenter lineup. Heck, they could have gotten enthusiast Optane DIMMs out for PCs and had a slow memory + fast memory tier. DIMMs with 100–300 ns latency are actually fast enough to serve as RAM for the vast majority of use cases.
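To put that two-tier idea in perspective, here's a back-of-envelope sketch of average access latency when most accesses hit a DRAM hot tier and the rest fall through to Optane. The latency figures are rough assumptions (DRAM ~80 ns; Optane at the 300 ns worst case mentioned above), not measured numbers:

```python
# Back-of-envelope: effective access latency for a DRAM + Optane tier.
# Latency numbers are assumptions for illustration, not measurements.
DRAM_NS = 80.0      # assumed typical DDR load-to-use latency
OPTANE_NS = 300.0   # worst-case Optane DIMM latency from the post

def effective_latency(dram_hit_rate: float) -> float:
    """Average latency if dram_hit_rate of accesses land in the fast tier."""
    return dram_hit_rate * DRAM_NS + (1.0 - dram_hit_rate) * OPTANE_NS

for hit in (0.90, 0.95, 0.99):
    print(f"{hit:.0%} DRAM hits -> {effective_latency(hit):.0f} ns average")
```

Even with only 90% of accesses caught by the DRAM tier, the blended latency stays in the low hundreds of nanoseconds, which is the post's point about Optane being "fast enough" for most workloads.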
Here’s another one they will miss:

Right now, local LLMs are a relatively small market. But at some point, local models will be good enough and interest in this market will skyrocket.

Guess what? No unified memory consumer products with a big GPU from Intel.

Apple and AMD will benefit the most. AMD with Strix Halo. Apple with Pro and Max series Macs.

And by the time Intel comes out with something, market boom will probably be over.

They have truly missed every single trend in the last 10 years. Every single one. Call it unlucky or poor management or both.
 
Uh, no. Scaling laws still hold true.
Anything remotely capable will run on a big chungus rack stashed in some data center.
Uh, yes. Local models will never be as good as cloud models; that won't change. What will change is that local models will get good enough to be very useful, and local hardware will too.

Perfect example is M5 series adding matmul acceleration into GPUs. Now suddenly prompt processing is 4x faster than M4 series. Run something like GLM 4.7 flash locally and you can do something very useful and private.
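The reason matmul acceleration moves prompt processing so much: prefill on a dense transformer is roughly 2 × parameters × prompt tokens FLOPs (ignoring attention), i.e. almost pure matmul work. A rough sketch, with a hypothetical 30B-parameter model and throughput numbers chosen purely for illustration:

```python
# Rough estimate of why prefill is matmul-bound. The 2*P*N FLOPs rule of
# thumb is the standard dense-transformer approximation; the model size
# and throughput below are hypothetical examples, not measured figures.
def prefill_tflops(params_billions: float, prompt_tokens: int) -> float:
    """Approximate total prefill compute in TFLOPs (attention ignored)."""
    return 2.0 * params_billions * 1e9 * prompt_tokens / 1e12

work = prefill_tflops(30, 8192)   # hypothetical 30B model, 8k-token prompt
print(f"~{work:.0f} TFLOPs of (mostly matmul) prefill work")

# At an assumed 10 TFLOP/s sustained, that's ~49 s of prefill; a 4x
# matmul speedup cuts it to ~12 s, the kind of gain the M5 reportedly shows.
for tflops_per_s in (10.0, 40.0):
    print(f"{tflops_per_s:>4.0f} TFLOP/s -> {work / tflops_per_s:.0f} s")
```

Decode, by contrast, is memory-bandwidth-bound, which is why matmul units help prompt processing far more than token generation.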

Don’t underestimate people wanting local waifus and generating porn offline.
 
Don’t underestimate people wanting local waifus and generating porn offline.
Yeah this can't be underestimated
Perfect example is M5 series adding matmul acceleration into GPUs. Now suddenly prompt processing is 4x faster than M4 series. Run something like GLM 4.7 flash locally and you can do something very useful and private.
We had matmul acceleration on local hardware before; Apple is way late to the game.
 
What will change is local models will get good enough to be very useful and local hardware will also be the same.
They won't. Scaling laws still hold true.
Overfitting a 7B model to do well in benchmarks won't make it more capable.
Don’t underestimate people wanting local waifus and generating porn offline.
The market for stonecold truecel jackoff sessions is smaller than you think, and they all roll gacha anyway.
 
Right now, local LLMs are a relatively small market. But at some point, local models will be good enough and interest in this market will skyrocket.

Guess what? No unified memory consumer products with a big GPU from Intel.
The NPU is what will be used for this, if anything, on regular PCs.
 
So 18A yields are still crap?

13 months ago -

"Gelsinger fires back at recent stories about 18A's poor yields, schools social media commenters on defect densities and yields"
Uhh, this was before the node was in HVM... As for the yields, they're not where LBT wants them to be.
 