
Discussion Intel current and future Lakes & Rapids thread

I do agree with Tom's point though, streamlining the design process should be a focus for Intel in order to prevent further delays.
 
Nah. I wouldn't be surprised if he's already left the company. They've stripped him of any actual leadership role already. Probably just waiting for a good time to announce his departure.
He will make a comeback with a vengeance. I promise you! We haven't seen the last of him yet. He's too ambitious to let a little admonishment and demoting get him down.
 
You mean MLID over promising and underdelivering?
This is the reason people were disappointed in RDNA 3 not beating the 4090, despite AMD never claiming it could, and their own slides at best showed it slightly behind the 4090 instead of a tier behind. This one isn't as much on MLID though as other leakers claiming 3x performance, but still...
I also can't wait for MLID's "nerfed MTL" update, when he'll claim they axed Redwood Cove's IPC uplift from 15-21% down to the 5-10% I'm pretty sure it's actually going to be, a couple of months before launch (despite that not even being possible lol).
ALSO, is it weird that other leakers were claiming, MONTHS before MLID, that MTL desktop would probably only launch in the mid-to-low end, later than mobile? Is it weird that RPL-R was also already leaked a couple of weeks ago?
Exactly. You can find comments from notable insiders (e.g. Retired Engineer) weeks ago saying that SRF-AP was supposed to have been canceled a while back. If it's actually back, even as a compromised derivative of SRF-SP, then that would still be an improvement. Not ideal, but it would be something.

And yes, the flip flopping he's done on desktop is ridiculous. I'm not going to claim that Intel has made no changes to their roadmap, but it's clear like half of his predictions have been BS.

And definitely get your snacks prepared for Redwood Cove, because that meltdown is going to be hilarious. I think even 5-10% is optimistic.
 
He will make a comeback with a vengeance. I promise you! We haven't seen the last of him yet. He's too ambitious to let a little admonishment and demoting get him down.
I'm sure there's a different company he can convince of that. Just probably not Intel. The corporate ladder rarely goes two ways.
 
He will make a comeback with a vengeance. I promise you! We haven't seen the last of him yet. He's too ambitious to let a little admonishment and demoting get him down.
I want to know who at Intel was lighting up a big wad of green when they thought, yeah, let's hire Raja Koduri as if he were some blessed god of GPU architecture. Raja had his bangers in the past, but he's biting off more than he can chew nowadays.
 
MLID/RGT-
-Amazing performance claims for RDNA 3. Likely a fundamental misunderstanding of dual issue, since AMD counts it as part of the perf/clock gain. RGT specifically said 2.5x over the predecessor! But MLID claimed some fantastic things as well.
-Can't forget Zen 4 having 29% IPC.

To be fair to MLID, he was one of the less optimistic leakers about RDNA 3 and all but called out RGT's over-optimism as bs.

The most recent MLID egg-on-face I can think of is when he claimed Genoa was "full of accelerators."

But for RDNA 3 it's pretty obvious AMD failed to hit their targets so there's only so much blame you can place on the leakers (although RGT deserves at least some blame for riding the hype train so much harder than others). If anyone was leaking "mediocre performance and even worse efficiency gains" any more than a few months before launch, then that would probably be "being right for the wrong reasons."
 
Other than the power consumption issue, hasn't Intel exceeded expectations so far in terms of architecture performance? Why would they falter now?
In what way have they exceeded expectations? They've managed only two new generational uarch gains since Skylake, and with little regard for power and area, at least with Sunny Cove.

I think the logic is pretty simple. Since the Oregon team was dissolved, Intel's only had one team working on Core, and they've all been focused on Lion Cove. Redwood Cove, Raptor Cove, whatever. They get the scraps. On top of that, Redwood Cove was supposed to be a 2022 core. Even AMD has two teams for Zen, and they've consistently taken 1.5-2 years for every new gen. To expect Intel to provide such large gains year over year with just one team, well, the math simply doesn't work.
 
In what way have they exceeded expectations?
I wasn't expecting a big improvement over Alder Lake, but they managed to tweak their process technology to eke decent performance out of Raptor Lake without hitting 400W. In that way, I think they exceeded at least my expectations. I think they will do something similar with Redwood Cove too, where it should be at least competitive with whatever AMD offers, though maybe not on the power efficiency front.
 
I wasn't expecting a big improvement over Alder Lake, but they managed to tweak their process technology to eke decent performance out of Raptor Lake without hitting 400W. In that way, I think they exceeded at least my expectations. I think they will do something similar with Redwood Cove too, where it should be at least competitive with whatever AMD offers, though maybe not on the power efficiency front.

Redwood Cove will be exclusively in mobile platforms. If they don't beat AMD in efficiency, it's dead in the water. At least on desktop you can sort of get away with burning huge amounts of power. In a laptop? No.
 
I wasn't expecting a big improvement over Alder Lake, but they managed to tweak their process technology to eke decent performance out of Raptor Lake without hitting 400W. In that way, I think they exceeded at least my expectations. I think they will do something similar with Redwood Cove too, where it should be at least competitive with whatever AMD offers, though maybe not on the power efficiency front.
Raptor Lake was indeed a much better improvement than I expected, but how much of that is design vs fab is impossible to say. But either way, "much better" means eking out a couple hundred MHz. That's all well and good, but they need much more radical changes to be competitive long term.
 
Still didn't impact their ability to sell Alder Lake laptops.

Uhh

You sure about that? Aren't they losing marketshare?

Raptor Lake was indeed a much better improvement than I expected, but how much of that is design vs fab is impossible to say. But either way, "much better" means eking out a couple hundred MHz. That's all well and good, but they need much more radical changes to be competitive long term.

The real problem with Raptor Cove is that it still isn't area-efficient enough or power-efficient enough for them to put more than 8 of them on a desktop SKU. They're too dependent on their e-cores.
 
You sure about that? Aren't they losing marketshare?
In client, they seem to have been flat, or even slightly gaining market share over the last year or two. It's server that's driving their big losses right now.
The real problem with Raptor Cove is that it still isn't area-efficient enough or power-efficient enough for them to put more than 8 of them on a desktop SKU. They're too dependent on their e-cores.
They could have done 12+0 instead, but it would lose to 8+16 pretty badly in MT workloads. Atom's area efficiency is pretty much its reason for existing right now, so I doubt that will fundamentally change any time soon. The main issue with Intel's big core area is cost competitiveness in server.
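The 12+0 vs 8+16 comparison above can be sketched as a back-of-envelope calculation. Both numbers here are assumptions for illustration, not measured Intel data: that a cluster of four e-cores occupies roughly the area of one P-core, and that one e-core delivers roughly 0.55x a P-core's throughput in embarrassingly parallel work.

```python
# Back-of-envelope MT throughput at roughly equal die area.
# ASSUMPTIONS (illustrative, not measured data):
#   - a 4-e-core cluster takes about the area of one P-core
#   - one e-core delivers ~0.55x a P-core's MT throughput
E_CORES_PER_P_AREA = 4
E_CORE_THROUGHPUT = 0.55  # relative to one P-core

def area_in_p_core_equivalents(p_cores: int, e_cores: int) -> float:
    """Total die area expressed in units of one P-core's area."""
    return p_cores + e_cores / E_CORES_PER_P_AREA

def mt_throughput(p_cores: int, e_cores: int) -> float:
    """Aggregate throughput on a fully parallel workload, in P-core units."""
    return p_cores + e_cores * E_CORE_THROUGHPUT

for p, e in ((12, 0), (8, 16)):
    print(f"{p}+{e}: area ~{area_in_p_core_equivalents(p, e):.0f} "
          f"P-core equivalents, MT throughput ~{mt_throughput(p, e):.1f}")
```

Under those assumptions the two configurations occupy about the same area, but 8+16 ends up with roughly 40% more multithreaded throughput, which is why a hypothetical 12+0 part would lose badly in MT.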
 
In client, they seem to have been flat, or even slightly gaining market share over the last year or two. It's server that's driving their big losses right now.

At the expense of margins, though. Seems like their revenue is not doing all that well. And yes server is the big(ger) loser.

They could have done 12+0 instead, but it would lose to 8+16 pretty badly in MT workloads.

That's what I'm alluding to, though. If their design + process were healthier, they'd have a 16+0 part by now. Adding more e-cores creates its own problem, and Amdahl's Law prevents those e-cores from being useful beyond a certain point.
 
At the expense of margins, though. Seems like their revenue is not doing all that well.
AMD was doing even worse in the client space this last quarter. It's just that server overshadowed it for both companies.
That's what I'm alluding to, though. If their design + process were healthier, they'd have a 16+0 part by now.
Seems to be far more about Intel's monolithic, single ring design than anything process or [CPU] design related. And clearly it's enough to be competitive until/unless AMD adds another CPU chiplet. 8+32 could conceivably carry them for a long time.
Adding more e-cores creates its own problem, and Amdahl's Law prevents those e-cores from being useful beyond a certain point.
Yet they're empirically quite useful for scenarios where you'd want >8c to begin with.
 