Discussion: Intel Titan, Razor and Serpent Lakes Discussion Threads


Geddagod

Golden Member
Dec 28, 2021
Area matters more than ever now, given each mm^2 yielded costs more each node.
Area matters less now, since you can always just suffer worse margins if you have to but still create a competitive CPU. If you suck in perf or power, you just have to eat it, essentially.
Before chiplets, Intel couldn't add cores to compete no matter how hard they tried, with ICL and other 14nm parts being only marginally smaller than the reticle limit. They were what, north of 600 mm^2?
Intel right now spends more die area on GNR than AMD does on Turin classic, but they are still able to match core counts with products built on close-ish nodes (5nm-class stuff).
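To make the quoted "each mm^2 yielded costs more" point concrete, here is a toy cost-per-good-die sketch using a simple Poisson yield model. Every input (wafer price, defect density) is a made-up placeholder, not real foundry data, so only the shape of the trend matters:

```c
// Toy cost-per-good-die model. All inputs are hypothetical placeholders,
// not real foundry pricing or defect densities.
#include <math.h>
#include <stdio.h>

// Poisson yield model: fraction of defect-free dies at a given area and defect density.
static double yield(double area_mm2, double d0_per_mm2) {
    return exp(-area_mm2 * d0_per_mm2);
}

int main(void) {
    const double wafer_cost = 17000.0; // assumed 300mm wafer price, USD (illustrative)
    const double wafer_area = 70685.0; // usable area of a 300mm wafer, mm^2
    const double d0         = 0.001;   // assumed defects per mm^2 (illustrative)

    for (double die = 100.0; die <= 600.0; die += 100.0) {
        double gross = wafer_area / die;       // dies per wafer, ignoring edge loss
        double good  = gross * yield(die, d0); // dies that come out defect-free
        printf("%3.0f mm^2 die: ~%5.1f good dies/wafer, ~$%6.2f per good die (~$%.2f per mm^2)\n",
               die, good, wafer_cost / good, wafer_cost / good / die);
    }
    return 0;
}
```

The cost per yielded mm^2 climbs as the die grows, which is why near-reticle monolithic server dies were such a painful way to add cores, and why eating some margin on extra chiplet area is at least a workable lever.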
It's standard Mx Max stuff and it sucks.
Why would Apple bother using it at this point? They can easily afford to use whatever they want.
not how it works.
Toddler caches can't handle workloads that spill into L2 (which in server is ~most of them). Infinite L2 thrashing, especially on the i$ side, just what everyone wanted.
The L2 of these systems is pretty close in latency (in cycles) to what you see in Intel, ARM, and AMD CPUs.
[Attached chart: L2 latency in cycles across various CPUs]
The lack of an L3 is where the large gap appears to be (if you added desktop systems with their full L3 configs to this chart), not the L2.
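For context on where charts like that come from: load-to-use latency is usually measured with a pointer-chase microbenchmark, where every load depends on the previous one so prefetchers can't hide anything. A minimal sketch (POSIX C; the buffer size and iteration count are arbitrary, and you divide the ns result by your cycle time to get latency in cycles):

```c
// Minimal pointer-chase load-latency sketch. Vary BUF_BYTES to step through
// L1/L2/L3/DRAM; the numbers here are arbitrary, not tuned for any specific CPU.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_BYTES (8u << 20) // 8 MiB working set: spills well past a typical L2

int main(void) {
    size_t n = BUF_BYTES / sizeof(void *);
    void **buf  = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) idx[i] = i;
    srand(1);
    for (size_t i = n - 1; i > 0; i--) {     // Fisher-Yates shuffle so the
        size_t j = (size_t)rand() % (i + 1); // hardware prefetcher can't follow the chain
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)           // link the slots into one big cycle
        buf[idx[i]] = &buf[idx[(i + 1) % n]];

    void **p = &buf[idx[0]];
    const long iters = 20 * 1000 * 1000;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        p = (void **)*p;                     // dependent loads: one full cache/memory access per step
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("avg load-to-use latency: %.1f ns (end ptr %p)\n", ns / iters, (void *)p);
    free(buf);
    free(idx);
    return 0;
}
```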
meme lolcow startups that shipped 0 parts ever do not count. they don't even have human rights as far as server multiprocessors are concerned.
If you are a startup, why would you use a cache hierarchy that has little chance of working well? It's not as if what ARM, Intel, and AMD are doing is the "new" method of cache hierarchies...
The main limitation is that going back to Dunnington (or Woodcrest, if you're really into having no real LLC) sucks.
This was what, almost 2 decades ago?
Who cares about Intel, they're completely irrelevant in server.
If Intel is willing to spend a bunch of die area on something as niche as AMX, why wouldn't they on something much more important in per-core perf?
 

adroc_thurston

Diamond Member
Jul 2, 2023
Area matters less now, since you can always just suffer worse margins if you have to but still create a competitive CPU
I sure love running a worse business for no reason.
Why would Apple bother using it at this point?
I dunno.
The lack of an L3 is where the large gap appears to be (if you added desktop systems with their full L3 configs to this chart), not the L2.
You still do not understand the problem.
If you are a startup, why would you use a cache hierarchy that has little chance of working well?
Why would Ampere Computing do a custom core that sucks? Startup things, man.
This was what, almost 2 decades ago?
Nothing changed in shared L2 land since then.
If Intel is willing to spend a bunch of die area on something as niche as AMX, why wouldn't they on something much more important in per-core perf?
idk Intel is dumb.
They're also irrelevant in server.
 

adroc_thurston

Diamond Member
Jul 2, 2023
Yup, the market share leader is irrelevant.
They own dinosaur segments that in no way drive platform/perf CAGR.
Reminder for everyone not living in their own Traumwelt (dream world): the last unit market share data still has AMD at around 27.5%.
AMD revenue share is at like 40%.
And boyyyy it'll get a lot lot worse next year.
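For what it's worth, those two numbers are self-consistent; a quick back-of-the-envelope check (assuming a two-vendor x86 server split and using only the percentages quoted above) shows the ASP gap they imply:

```c
// How ~27.5% unit share and ~40% revenue share coexist: it implies AMD's
// average selling price is well above Intel's. Two-vendor split assumed;
// the percentages are the ones quoted above, nothing else.
#include <stdio.h>

int main(void) {
    const double amd_units = 0.275, amd_rev = 0.40;
    double amd_asp_index   = amd_rev / amd_units;                 // ~1.45
    double intel_asp_index = (1.0 - amd_rev) / (1.0 - amd_units); // ~0.83
    printf("AMD ASP is roughly %.1fx Intel's on this split\n",
           amd_asp_index / intel_asp_index);                      // ~1.8x
    return 0;
}
```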
 

511

Diamond Member
Jul 12, 2024
That's easy to answer: Intel has no other datacenter AI story at the moment... so they need to have something to claim that they are still relevant in the AI space...
At least AMX gets proper support in PyTorch and llama.cpp, and GNR has decent memory bandwidth thanks to MRDIMMs, so you can run high-parameter models without breaking the bank. Can't say the same about Gaudi/dGPU, which are MIA and may or may not happen.
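Side note on the "proper support" point: frameworks only light AMX up when the CPU actually advertises it, which you can check straight from CPUID. A minimal sketch for x86-64 with GCC/Clang (bit positions per Intel's CPUID leaf 7 documentation; the OS still has to grant tile-state permission separately):

```c
// Check whether the CPU advertises AMX via CPUID leaf 7, subleaf 0
// (EDX bit 22 = AMX-BF16, bit 24 = AMX-TILE, bit 25 = AMX-INT8).
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 7 not available");
        return 1;
    }
    printf("AMX-BF16: %s\n", (edx >> 22) & 1 ? "yes" : "no");
    printf("AMX-TILE: %s\n", (edx >> 24) & 1 ? "yes" : "no");
    printf("AMX-INT8: %s\n", (edx >> 25) & 1 ? "yes" : "no");
    return 0;
}
```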
 

511

Diamond Member
Jul 12, 2024
Thanks for posting the slide. That looks like quite a lot of bandwidth from LPDDR5x. It looks like 16 channels (of 64 bit channel equivalent).
16-channel LPDDR5X-9533 will net you that bandwidth.
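A quick sanity check on that figure, taking the 16 x 64-bit channels and the 9533 MT/s grade straight from the posts above (not independently verified):

```c
// Peak theoretical bandwidth = channels * bytes per transfer * transfer rate.
// Channel count and speed grade are taken from the posts above, nothing else.
#include <stdio.h>

int main(void) {
    const double channels = 16.0;       // 64-bit-equivalent channels
    const double bytes    = 64.0 / 8.0; // bytes per transfer per channel
    const double mts      = 9533.0;     // mega-transfers per second
    printf("peak bandwidth: %.0f GB/s\n", channels * bytes * mts / 1000.0); // ~1220 GB/s
    return 0;
}
```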
 

Doug S

Diamond Member
Feb 8, 2020
@Tigerick is incorrect; Nvidia's own slide from GTC 2025 said that Vera was using LPDDR5X... so I have no idea where he is getting LPDDR6 from, because I have corrected him on this before...

He has a bug up his butt for LPDDR6 for some reason. He thinks everyone is going to switch to LPDDR6 at a super aggressive pace, not sure why he's so obsessed with it.
 

Geddagod

Golden Member
Dec 28, 2021
That's easy to answer: Intel has no other datacenter AI story at the moment... so they need to have something to claim that they are still relevant in the AI space...
When the product got defined, they probably didn't think they needed a "data center" AI story (at least not that they needed it so sorely), nor would they likely have thought that they were going to be so uncompetitive in general data center. Not that GNR isn't closing the gap significantly on AMD anyway, at least for the standard SKUs.
GNR was publicly supposed to launch against Genoa for much of its lifetime, and leaked roadmaps indicated it would have launched even earlier than Genoa.
And let's not forget that SPR had AMX too, not just GNR. And there's no way Intel thought, for that part, that they had to have an AI story, or that they needed AMX to grab niche wins where they couldn't win general server workloads.
 

Tangopiper

Member
Nov 11, 2025
Feel free to express your speculation.
This doesn't line up with Intel's recent cadence unless I'm reading it wrong?

Or are we going back to yearly desktop releases post-Nova Lake? I suppose it's a good sign of their improved execution if they get away from the Meteor/Panther desktop misses.