• We’re currently investigating an issue related to the forum theme and styling that is impacting page layout and visual formatting. The problem has been identified, and we are actively working on a resolution. There is no impact to user data or functionality; this is strictly a front-end display issue. We’ll post an update once the fix has been deployed. Thanks for your patience while we get this sorted.

Discussion: Intel’s Unified Core

I don't know where people got the idea that caches are power- or area-inefficient. Cache is a "dumb" way of adding performance.
I'm not claiming that caches are power inefficient, but they definitely can be less area efficient than adding more cores. That's why Zen5c halves the L3 cache.
 
I'm not claiming that caches are power inefficient, but they definitely can be less area efficient than adding more cores. That's why Zen5c halves the L3 cache.
I'm responding to @Geddagod's claims that having large private caches is somehow a bad thing. That's why we DON'T include caches in our calculation, because it's the easiest thing to add (except L1). It's as close to copy-paste as you can get.

If you are going to add anything, you first have to consider whether it'll be better than adding cache. But the area-efficiency argument isn't straightforward either, because cache is extremely redundant. It's a sea of redundancy; that's why the cache portions of die shots look so smooth.
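The redundancy point connects to yield: SRAM arrays ship with spare rows/columns, so many defects landing in a cache can be repaired, while a defect in random logic usually kills the die. A minimal sketch using the standard Poisson defect-yield model — the defect density, areas, and repair rate below are made-up illustrative assumptions, not figures for any real product:

```python
import math

# Poisson yield model: Y = exp(-D * A), where D is defect density (defects/mm^2)
# and A is the defect-sensitive area. All numbers are illustrative assumptions.
D = 0.02            # assumed defect density, defects per mm^2
logic_mm2 = 50.0    # assumed random-logic area
cache_mm2 = 50.0    # assumed cache array area
repair_rate = 0.95  # assumed fraction of cache defects fixable via spare rows/cols

# Logic: every defect is fatal.
y_logic = math.exp(-D * logic_mm2)

# Cache: only the unrepairable fraction of defects is fatal, which is
# equivalent to shrinking the "effective" defect-sensitive area.
y_cache = math.exp(-D * cache_mm2 * (1 - repair_rate))

print(f"logic yield: {y_logic:.1%}")
print(f"cache yield: {y_cache:.1%}")
```

With these assumed numbers the cache block yields far better than an equally sized logic block, which is why big caches are comparatively cheap to manufacture even when they aren't cheap in area.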
 
I'm responding to @Geddagod's claims that having large private caches is somehow a bad thing.
It eats up a bunch of area, and Intel needs it to compensate for a bad uncore. This isn't an added bonus.
That's why we DON'T include caches in our calculation, because it's the easiest thing to add (except L1). It's as close to copy-paste as you can get.
You're right, let's not include caches. They don't take up space on the die; those giant cache blobs are actually just figments of our imaginations.
But the area-efficiency argument isn't straightforward either, because cache is extremely redundant. It's a sea of redundancy; that's why the cache portions of die shots look so smooth.
No, it's pretty straightforward. Even if it doesn't hurt yields as much, that area is still unavailable for stuff like wider cores or just outright more cores. And it only gets worse as logic scales better than SRAM on newer nodes.
There's a reason why entire cache layers (Infinity Cache on MI300, L3 on Clearwater Forest) are being moved off the compute tile: cache eats up a bunch of space on the compute tile that's needed for other stuff.
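A rough back-of-envelope of the "logic scales better than SRAM" point. All starting areas and per-node scaling factors below are made-up assumptions for illustration, not real figures for any process or product:

```python
# Back-of-envelope: logic shrinks faster than SRAM on newer nodes, so a
# fixed cache capacity eats a growing share of the compute tile.
# All numbers are illustrative assumptions, not real product data.

core_logic_mm2 = 4.0  # assumed logic area of one core on the starting node
l3_mm2 = 4.0          # assumed L3 slice area per core on the starting node

logic_shrink = 0.65   # assumed logic area scaling factor per node step
sram_shrink = 0.90    # assumed SRAM area scaling factor per node step

for node in range(4):
    total = core_logic_mm2 + l3_mm2
    print(f"node +{node}: logic {core_logic_mm2:.2f} mm^2, "
          f"L3 {l3_mm2:.2f} mm^2, cache share {l3_mm2 / total:.0%}")
    core_logic_mm2 *= logic_shrink
    l3_mm2 *= sram_shrink
```

Under these assumptions the cache's share of the tile climbs from half to well over two thirds after a few node steps, which is the economic pressure behind moving cache layers off the compute tile.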
So this isn't necessarily accurate analysis either. There's a saying: "before you expand your uarch, consider whether adding an equivalent amount of cache would be worth it instead."
The leading-edge designs in ARM cores have very area-hungry cores in terms of logic, with smaller shared caches. The leading-edge design in x86 has "skinny" cores with a very fast L3.
In Intel's case, maybe for LNC they need those extremely large private caches, but that's just a result of the L3 being bad and memory latency being even worse relative to the competition.
I'm not claiming that caches are power inefficient,
Yeah, no one has ever claimed that lol, idk where that strawman is coming from.
Delays happen not to "make it better" but because they misfired and needed extra time to fix things.
Not when things get redefined. Like GNR.
So a hypothetical CWF last year could have been 15% better perf/watt while arriving more than 6 months earlier.
How?
The same goes for Clearwater Forest arriving 3-5 months ago and performing 10-15% better. It would also have had a big impact.
Sure lol? Idk what your point with this is.
 
Not when things get redefined. Like GNR.
redefinition adds delay
No, it's pretty straightforward. Even if it doesn't hurt yields as much, that area is still unavailable for stuff like wider cores or just outright more cores. And it only gets worse as logic scales better than SRAM on newer nodes.
There's a reason why entire cache layers (Infinity Cache on MI300, L3 on Clearwater Forest) are being moved off the compute tile: cache eats up a bunch of space on the compute tile that's needed for other stuff.
Also cost, because cache density per mm² isn't improving much compared to logic with each node.
 
I tried putting this link into the forum search but couldn't find it, so...
 
I tried putting this link into the forum search but couldn't find it, so...
It's not part of Intel's Unified Core, though.
 
I tried putting this link into the forum search but couldn't find it, so...

This forum still has a search? It feels straight out of the '90s. "Actively working on a resolution," my ass.
 
So this is the 3rd future-gen CPU, right, after Nova and Razer? I wonder what the market will look like at that point. If the AI frenzy has died down, I feel a global recession could put the kibosh on plans. If the frenzy is still there, then we'll still be stuck with DDR5, as DDR6 would be too expensive for client 🙂 There are already rumors that next year only a minimal number of phones will use LPDDR6 because it's too expensive. Plus one Chinese OEM is bailing out of Ultra phones as they're too expensive to manufacture.
 
Yes, but as I said, Unified is just a brand-new core uarch altogether, led by Stephen Robinson.

It is really a bit boring compared to Royal Core. Royal v3 + copper shark titanlake looked great; 300% of Golden Cove IPC would be the ST king. Current x86 ST looks lame compared to Apple.
 