DrMrLordX (Lifer, joined Apr 27, 2000)
> Bleak margin looking forward.

Oh, so you're looking at the Q1 guidance.
So somehow they have no foresight of any sort. I don't know if this is (a) they are all stupid, (b) it's too chaotic, or (c) even malice/intentional.
> So somehow they have no foresight of any sort. I don't know if this is (a) they are all stupid, (b) it's too chaotic, or (c) even malice/intentional.

Intel has always been either too early or too late to the boat, and has been for a while.
> Means they leech off tightly gatekept IP/patent portfolios.

Every megacorp in a nutshell, tbh.
> You know, they say hindsight is 20/20, but now I call them Intel "bad timing" Corporation.

Here’s another one they will miss:
They make a decent chip with Panther Lake, at a time when memory and SSD prices will likely limit its impact. And they got their dGPU out months after the crypto boom stopped. And if the Xe4 rumors are accurate, then after the AI market crashes they could be significantly behind the competition, along the lines of B390 vs. 890M.
Had they continued their Optane lineup, they could have benefited from the AI push in their datacenter lineup. Heck, they could have gotten enthusiast Optane DIMMs out for PCs and had a slow-memory + fast-memory tier. DIMMs at 100-300 ns latency are actually fast enough to serve as RAM for the vast majority of use cases.
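To put rough numbers on that tiering idea, here is a back-of-the-envelope sketch in Python. The latency figures are illustrative assumptions (typical DDR random access around 80 ns, Optane DIMMs at the high end of the 100-300 ns range quoted above), not measurements:

```python
# Average access latency of a hypothetical two-tier DRAM + Optane-DIMM setup.
# All numbers are illustrative assumptions, not measured values.
DRAM_NS = 80      # assumed DDR random-access latency
OPTANE_NS = 300   # high end of the 100-300 ns Optane DIMM range

def effective_latency(dram_hit_rate: float) -> float:
    """Average latency when a fraction of accesses hit the fast DRAM tier."""
    return dram_hit_rate * DRAM_NS + (1 - dram_hit_rate) * OPTANE_NS

for hit in (0.5, 0.9, 0.99):
    print(f"{hit:.0%} DRAM hit rate -> {effective_latency(hit):.1f} ns average")
```

Even at a modest 90% DRAM hit rate, the average lands near 100 ns, which supports the point that the slow tier would be usable as plain RAM for most workloads.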
> But at some point, local models will be good enough and interest in this market will skyrocket.

Uh, no. Scaling laws still hold true.
> Uh, no. Scaling laws still hold true.

Uh, yes. Local models will never be as good as cloud models; that won’t change. What will change is that local models will get good enough to be very useful, and local hardware will get there too.
Anything remotely capable will run on a big chungus rack stashed in some data center.
> Don’t underestimate people wanting local waifus and generating porn offline.

Yeah, this shouldn’t be underestimated.
> Perfect example is the M5 series adding matmul acceleration to its GPUs. Now suddenly prompt processing is 4x faster than on the M4 series. Run something like GLM 4.7 flash locally and you can do something very useful and private.

We had matmul acceleration on local hardware before; Apple is way late to the game.
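The reason matmul units move prompt processing specifically: prefill is compute-bound, costing roughly 2 × parameters × prompt tokens in FLOPs, so time scales almost directly with matmul throughput. A sketch with entirely hypothetical throughput numbers (not Apple specs):

```python
# Prefill (prompt processing) is compute-bound: ~2 * params * tokens FLOPs.
# Throughput figures below are hypothetical, purely to show the scaling.
def prefill_seconds(params_b: float, prompt_tokens: int, tflops: float) -> float:
    """Estimated prefill time for a dense model of `params_b` billion params."""
    flops = 2 * params_b * 1e9 * prompt_tokens
    return flops / (tflops * 1e12)

base = prefill_seconds(params_b=30, prompt_tokens=8000, tflops=4)   # no matmul units
fast = prefill_seconds(params_b=30, prompt_tokens=8000, tflops=16)  # 4x matmul throughput
print(f"{base:.0f} s -> {fast:.0f} s")
```

Quadrupling matmul throughput cuts the estimated prefill time by the same 4x factor, which is why a long prompt goes from painful to tolerable on hardware with dedicated matmul units.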
> What will change is local models will get good enough to be very useful and local hardware will also be the same.

They won’t. Scaling laws still hold true.
> Don’t underestimate people wanting local waifus and generating porn offline.

The market for stone-cold truecel jackoff sessions is smaller than you think, and they all roll gacha anyway.
> Right now, local LLMs are a relatively small market. But at some point, local models will be good enough and interest in this market will skyrocket.

On regular PCs, the NPU is what will be used for this, if anything.
Guess what? No unified-memory consumer products with a big GPU from Intel.
> We had matmul acceleration on local hardware before; Apple is way late to the game.

Yep. Nvidia did it in 2018, AMD in 2023, and Intel in 2022.
> Intel's got bigger problems than getting people to buy their SoCs for local LLMs.

Intel's main issue is execution in DCAI and, to some extent, IFS. At least IFS is making progress; can't say that about DCAI.
> Intel's main issue is execution in DCAI and, to some extent, IFS. At least IFS is making progress; can't say that about DCAI.

Intel's issue is that their CPU IP plainly sucks.
> Intel's issue is that their CPU IP plainly sucks. The rest is collateral stuff.

P-core, sure; not for E-core. As for collateral: I don't think IFS is collateral. Intel was and is a manufacturer.
> Not for E-core.

Atrocious power, and they know it, and they're doing nothing about it.
> I don't think IFS is collateral. Intel was and is a manufacturer.

18A yield being dookie is an issue fixable in a year.
> Atrocious power, and they know it, and they're doing nothing about it.

They are?
> 18A yield being dookie is an issue fixable in a year.

Maybe in 2-3 quarters; a year is a stretch.
> CPU IP is a thing they can fix, maybe with UC.

Should have done it sooner. Royal was having a skill issue; why did Royal even get funding, smh.
> They are?

Given UC targets, yeah.
> Maybe in 2-3 quarters; a year is a stretch.

A year's normal given how bad they have it now.
> Should have done it sooner. Royal was having a skill issue; why did Royal even get funding, smh.

I mean, it looked like a cool moonshot. Why not?
> Given UC targets, yeah.

There is your answer. They should have handed the E-core team the reins after seeing ARL ES and Royal missing the target.
> There is your answer. They should have handed the E-core team the reins after seeing ARL ES and Royal missing the target.

I mean, it would've accelerated UC by like a year, maybe. Meh.
> Or two.

No way, it would've been stuck in pre-def until LNC faceplanted itself in retail anyway.
> No way, it would've been stuck in pre-def until LNC faceplanted itself in retail anyway.

Intel had performance projections before the ES; they knew what was going to happen with the P-cores.
> 13 months ago: "Gelsinger fires back at recent stories about 18A's poor yields, schools social media commenters on defect densities and yields"
>
> So 18A yields are still crap?

Uhh, this was before the node was in HVM... As for the yields, they are not where LBT wants them to be.
> Uhh, this was before the node was in HVM...

So yields were excellent before HVM but became crap at HVM?
