Discussion: Anyone else bored out of their mind due to mainstream CPU market stagnation?

Jul 27, 2020
That's still very much a rumor. I see no benefit in Intel making a new die of a soon-to-be-outdated, problematic core. We also don't know what clocks it might reach before running into a power limit. I expect ARL and Zen 5 to perform better in just about every scenario.
Hey, what's stopping Intel from gluing together two rejected 8+0 Alder/Raptor Lake dies, each with only 6 cores active? :p

They have done it before with the P4 and (possibly; not sure what their last glue-y CPU was) most recently this: https://overclock3d.net/news/cpu_ma...th-up-to-48-cores-with-glued-together-design/
 

Wolverine2349

Senior member
Oct 9, 2022
That's still very much a rumor. I see no benefit in Intel making a new die of a soon-to-be-outdated, problematic core. We also don't know what clocks it might reach before running into a power limit. I expect ARL and Zen 5 to perform better in just about every scenario. Consoles are limited to 8 Zen 2 cores, so I don't know what you think you are missing out on.


Well, it does not matter what consoles are on. Just because consoles only have 8 cores does not mean PC games cannot push further. The PS4 and Xbox One had embarrassingly bad 8-core Jaguar CPUs (a poor man's Bulldozer) that got spanked severely by the Intel quads of the time.

Most games do not use more than 8 cores. But a few are starting to get some benefit, and it's only growing.

And there will be some cases where the 12+0, if it is made, may perform better than Zen 5 and even Arrow Lake. After all, Comet Lake, which had much worse IPC than Zen 3, still equaled or outperformed Zen 3 in a few games, and Zen 3 only beat it by a marginal amount in most games.

And the IPC uplift of Zen 5 and Lion Cove is even smaller than Zen 3's was over Comet Lake. And Raptor Cove would have 12 cores instead of 10. So I imagine that while it will not be as good in most cases, it will hold its own better than the 10-core Comet Lake did against Zen 3. That is assuming it is stable and reliable and does not inherit the problems of the RPL 8+16 die.

It is also possible the cores of Bartlett Lake on the 12+0 die will be backported Lion Cove and not Raptor Cove. Heck, they could even be Golden Cove. I would not write that one off just yet.

Intel would be smart to release a 12+0 Bartlett Lake. What better way to put a dent in AMD, as long as it is reliable? Especially if they make it a Lion Cove backport, whereas AMD on AM4 had no new CPU releases other than poor man's 5800X3Ds like the 5600X3D and 5700X3D and better-binned 5900Xs in the XTs.
 

Wolverine2349

Senior member
Oct 9, 2022
Hey, what's stopping Intel from gluing together two rejected 8+0 Alder/Raptor Lake dies, each with only 6 cores active? :p

They have done it before with the P4 and (possibly; not sure what their last glue-y CPU was) most recently this: https://overclock3d.net/news/cpu_ma...th-up-to-48-cores-with-glued-together-design/


If they do that, I am totally not interested. If it's 12 cores on a single ring bus, I am very much interested and a likely buyer, assuming it is stable and does not degrade too easily.

Though if Intel did what you say, it would not be a 12+0 die. It would instead be two 6+0 Golden Cove dies glued together. Intel does have a 6+0 Alder Lake die: the 12400, 12500, and non-K 12600.
 
Jul 27, 2020
Though if Intel did what you say, it would not be a 12+0 die. It would instead be two 6+0 Golden Cove dies glued together. Intel does have a 6+0 Alder Lake die: the 12400, 12500, and non-K 12600.
Hey, maybe they have a huge stock of eDRAM lying around that they could pair with that glue-y CPU die.

And bundle 256/512GB of Optane too for some faster-than-NVMe action!

Desperate times call for desperate measures!

By the way, any Intel executive lurking around here who pitches these ideas to Intel management and gets a bonus and a raise, don't be a frickin' loser and credit me/reward me appropriately :p
 

SarahKerrigan

Senior member
Oct 12, 2014
Nuh uh. Don't take the easy way out. Don't be lazy. DISSECT my post and tell me why it gives you the impression that I'm NOT okay :)

Well, okay.

Intel not doing AVX512 on Atom has little to do with being "wimpy" and everything to do with the fact that their Atom value proposition banks on significant general-purpose area-efficiency advantages over Core (with four Gracemont cores being the same area as one Golden Cove and delivering double the throughput performance, according to Intel in the ADL days.) AVX512 units are chunky and don't contribute to normal-application general-purpose performance. Therefore, they are left out. Could Intel make the penalty smaller by supporting AVX512 on 256b units? Sure, but it still wouldn't be free or close to it - it implies a whole extra architectural state, extra ISA features for stuff like masks, etc. That stuff isn't free in area, and it isn't free in validation cost.

Intel has done AVX512 on Atom before, with KNL, because that was a product where it made sense (and one they severely mismanaged.)
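For anyone who hasn't touched it: here's a rough sketch of what that extra mask state looks like from the software side, using the standard AVX-512 intrinsics (the function and array names are just made up for illustration). Every k-mask and 512-bit register here is architectural state a core has to carry, context-switch, and validate.

```c
// Minimal AVX-512 mask-register illustration (sketch only).
// Build with something like: gcc -O2 -mavx512f masked_add.c
#include <immintrin.h>
#include <stddef.h>

// Hypothetical helper: add b[i] into a[i] only where a[i] > threshold.
void masked_add(float *a, const float *b, size_t n, float threshold)
{
    const __m512 t = _mm512_set1_ps(threshold);
    for (size_t i = 0; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        // The comparison result lands in a k-mask register, one bit per lane.
        __mmask16 m = _mm512_cmp_ps_mask(va, t, _CMP_GT_OQ);
        // Masked add: lanes whose mask bit is 0 keep their original value.
        va = _mm512_mask_add_ps(va, m, va, vb);
        _mm512_storeu_ps(a + i, va);
    }
    // The remaining n % 16 elements would need a scalar or masked tail loop.
}
```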
 
Jul 27, 2020
Intel not doing AVX512 on Atom has little to do with being "wimpy" and everything to do with the fact that their Atom value proposition banks on significant general-purpose area-efficiency advantages over Core (with 4 Gracemont being the same area as one Golden Cove and delivering double the throughput performance, according to Intel in the ADL days.) AVX512 units are chunky and don't contribute to normal-application general-purpose performance. Therefore, they are left out. Could Intel make the penalty smaller by supporting AVX512 on 256b units? Sure, but it still wouldn't be free or close to it - it assumes a whole extra state, extra ISA features for stuff like masks, etc. That stuff isn't free on area and it isn't free on the cost of validation.

Intel has done AVX512 on Atom before, with KNL, because that was a product where it made sense (and one they severely mismanaged.)
Sorry, but you are making yourself sound like an Intel spokesperson. What about Zen4c/5c having AVX-512? How does it make sense for them to have it?

Just admit it. Intel is too lazy and too greedy to promote their own technology to consumers, leaving it to the maverick competitor to do so. Furthermore, they did not work AT ALL to promote its adoption in consumer applications, whereas with other stuff like NPUs/GPUs they bend over backwards trying to make sure developers use or optimize for them. Intel not promoting AVX-512 has more to do with their greed for enterprise money, and Gracemont not having it is mainly because Gracemont was never originally intended to be part of Intel's first commercially successful heterogeneous CPU. They were FORCED by someone (maybe frickin' Pat or his immediate numbnut predecessor) to marry Gracemont to their power-hungry Golden Cove, by any means necessary. And the decision to disable AVX-512 was purely to avoid admitting their mistake of creating a loser solution to the problem of not having enough core counts to compete with AMD.

Gracemont isn't even a child of anything remotely resembling an Atom core. It's a frickin' Skylake analogue. If you want to call something the descendant of Atom, let it be the LP E-cores in Meteor Lake. No Atom AFAIK consumed as much power as Gracemont.
 

SarahKerrigan

Senior member
Oct 12, 2014
Sorry but you are making yourself sound like an Intel spokesperson.

Well, I am well-known for being zealously pro-Intel. (/s, obviously.)

What about Zen4c/5c having AVX-512? How does it make sense for them having it?

Because they didn't settle on a boneheaded heterogeneity strategy built on assumptions like "one Atom core = one GNC thread = 2x GNC area-efficiency," among other things. Their process is better. They have a normal-people SoC flow, which goes against the way Intel has done things for the last zillion years.

Just admit it. Intel is too lazy and too greedy to promote their own technology to consumers and leave it to the maverick competitor to do so.

That must be it. Intel design engineers are just sitting on their butts. Their roadmap has nothing to do with the fact that Intel has severe institutional problems way more complex than "greedy and lazy."

Let me tell you an Intel story. They were laboring upon a multicore processor with some new I/O tech. The core itself was nothing remarkable - a simple rev of what had come before. It was, however, supposed to be a product of three different design centers that hadn't worked together before.

They didn't play nicely with each other - different siloes, different people, different ways of doing things - and it ended up three years late, by which time it was completely uncompetitive. This processor's cousin, which used the same I/O tech, ended up being entirely canceled because the new design group that was created to work on it was poorly managed. A bunch of people got laid off.

That is Intel. Not "they're just sitting around cackling while counting their sacks of cash and mocking the poor rubes who are deprived of AVX-512."

Gracemont isn't even a child of anything remotely resembling an Atom core. It's a frickin' Skylake analogue. If you want to call something the descendant of Atom, let it be the LP E-cores in Meteor Lake. No Atom AFAIK consumed as much power as Gracemont.

Gracemont is a clear and direct derivative of Tremont, which itself has a clear lineage from Goldmont. You would know this if you spent more time learning about microarchitecture and less time ranting on forums.
 

DigDog

Lifer
Jun 3, 2011
These comments? https://forums.anandtech.com/thread...m-cpu-market-stagnation.2616907/post-41135729

Please elaborate how the last gen Intel CPUs are not covered by your comments.

Their 9950X ES is already eating 300W@Unlimited power. That should be more than enough for 40 threads and possibly even for 48 threads since the Zen5c CCD isn't going to eat power like the fat CCD.

If Pat weren't such a feeble old CEO, we would've seen a 40 core K or KS CPU from Intel by now because a CEO with enough testosterone would ABSOLUTELY DEMAND it!
yeah ..

1. idk, maybe this wasn't on your radar, but - random video i googled - 13th and 14th gen intels are failing anywhere between 50% and 100% in server environments, and substantially even in consumer envs.
2. your second quote, i have to say, actually sounds like an Intel aficionado; MOAR POWER !! STRONKER COARS !! BURN THE CPU !!
respect.
 
Jul 27, 2020
Let me tell you an Intel story. They were laboring upon a multicore processor with some new I/O tech. The core itself was nothing remarkable - a simple rev of what had come before. It was, however, supposed to be a product of three different design centers that hadn't worked together before.

They didn't play nicely with each other - different siloes, different people, different ways of doing things - and it ended up three years late, by which time it was completely uncompetitive. This processor's cousin, which used the same I/O tech, ended up being entirely canceled because the new design group that was created to work on it was poorly managed. A bunch of people got laid off.
That story is meaningless to me unless you tell me which CPU it was that arrived three years late (Lakefield?) and the codename of the cousin (Lakefield had a cousin??? Darn it!).

Gracemont is a clear and direct derivative of Tremont, which itself has a clear lineage from Goldmont. You would know this if you spent more time learning about microarchitecture and less time ranting on forums.
OUCH! Well, I suck at learning and you wouldn't have the patience to teach me, let alone the charitable inclination :p

BUT, I think you mean this Tremont being compared to a similar Gracemont alternative: https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=212329,231803 ?

Both are based on a similar process (ok, maybe the GMont process is slightly more refined), but the GMont has turbo boost enabled and more cache, yet it has the same TDP??? Intel is clearly LYING here because: https://chipsandcheese.com/2021/12/21/gracemont-revenge-of-the-atom-cores/


GMont may be a direct derivative of Tremont but it is too muscular and power hungry to be considered an Atom. Saying GMont is Atom is like saying C++ is like C when the former is a massively bloated mess.

Sarah, Sarah, Sarah. I think I can agree with you on many things but GMont being Atom ain't one of them.
 

SarahKerrigan

Senior member
Oct 12, 2014
That story is meaningless to me unless you tell me which CPU it was that arrived three years late (Lakefield?) and the codename of the cousin (Lakefield had a cousin??? Darn it!).

It was a server CPU. I won't say which, but I suspect some folks here could puzzle it out.

OUCH! Well, I suck at learning and you wouldn't have the patience to teach me, let alone the charitable inclination :p

BUT, I think you mean this Tremont being compared to a similar Gracemont alternative: https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=212329,231803 ?

Both are based on similar process (ok maybe the GMont process is slightly more refined) but the GMont has turbo boost enabled and has more cache yet it has the same TDP??? Intel is clearly LYING here because: https://chipsandcheese.com/2021/12/21/gracemont-revenge-of-the-atom-cores/


GMont may be a direct derivative of Tremont but it is too muscular and power hungry to be considered an Atom. Saying GMont is Atom is like saying C++ is like C when the former is a massively bloated mess.

Sarah, Sarah, Sarah. I think I can agree with you on many things but GMont being Atom ain't one of them.

I mean, Atom is a name for a uarch family. It isn't a generic term for "core power under 3W" or whatever. Every descendant of Silvermont has been a clear evolution of its predecessor.

ADL-N shows that at reasonable clocks, Gracemont power dissipation is fine. In mainline ADL it got run at clocks way past its sweet spot because Intel needed a single Gracemont to match a single thread of a GNC to avoid making scheduling unnecessarily hard.
 

MadRat

Lifer
Oct 14, 1999
I wish we would see a decent supply of DDR5 laptops in the budget range. When I search for DDR5 laptops by specific processor, Google and places like Newegg bury them under DDR4 listings. Zen 5 Ryzen 3s actually look pretty kick-ass in reviews. But they are largely vaporware.
 
Jul 27, 2020
I mean, Atom is a name for a uarch family. It isn't a generic term for "core power under 3W" or whatever. Every descendant of Silvermont has been a clear evolution of its predecessor.

No Gracemont CPU in that list.


The Atom branded Gracemont there with 6W TDP is dual core while the quad core has 12W TDP.

Power consumption of Gracemont has clearly ballooned to the point that it does not resemble its immediate predecessor. But I guess you "win" because Intel is still calling it Atom. Basterds.
 

SarahKerrigan

Senior member
Oct 12, 2014

No Gracemont CPU in that list.


The Atom branded Gracemont there with 6W TDP is dual core while the quad core has 12W TDP.

Power consumption of Gracemont has clearly ballooned to the point that it does not resemble its immediate predecessor. But I guess you "win" because Intel is still calling it Atom. Basterds.

8c 1.8GHz Gracemont in 15W TDP is entirely in line with past Atoms. Better than a lot of them, even. Its perf/W is drastically better than that of Tremont.
 
Jul 27, 2020
8c 1.8GHz Gracemont in 15W TDP is entirely in line with past Atoms. Better than a lot of them, even. Its perf/W is drastically better than that of Tremont.

Overall, this is more power than the older N5105 units, but we are also getting a lot more performance.

Intel didn't miraculously increase performance at the same power levels. Everywhere I look, it says more power consumption.
 

SarahKerrigan

Senior member
Oct 12, 2014



Intel didn't miraculously increase performance at same power levels. Everywhere I look, it says more power consumption.

That says 2-3x higher perf. Looking at their previous review for N5105 systems here, even with the most charitable possible interpretation (36W vs 24.5W), that means Gracemont delivers 100-200% higher perf for roughly 50% higher dissipated power.

I'ma stand by "perf/W drastically better than that of Tremont", thanks.
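(Quick back-of-the-envelope with the numbers quoted above; these are the post's figures, not new measurements:)

```c
// Perf/W ratio implied by the figures above: ~24.5W (N5105/Tremont box)
// vs ~36W (Gracemont box) at a quoted 2x to 3x the performance.
#include <stdio.h>

int main(void)
{
    const double tremont_watts   = 24.5;
    const double gracemont_watts = 36.0;
    const double power_ratio = gracemont_watts / tremont_watts; /* ~1.47x */

    for (double perf = 2.0; perf <= 3.0; perf += 1.0)
        printf("%.0fx perf at %.2fx power -> %.2fx perf/W\n",
               perf, power_ratio, perf / power_ratio); /* ~1.4x to ~2.0x */
    return 0;
}
```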
 

H433x0n

Golden Member
Mar 15, 2023
Intel pulled it from their CPUs to stop almost every gamer from turning off E-cores (which they still do, ironically).
What? AVX512 isn't present because it would make the e-cores ISA-incompatible with the p-cores. It's not a conspiracy theory; I don't think Intel cares if you disable the e-cores on a product you've already purchased from them.
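To make the incompatibility point concrete, here's a hedged sketch (GCC/Clang on x86 assumed; names illustrative) of the usual "detect once, then dispatch" pattern, and why it breaks if only some cores in a package implement AVX-512:

```c
// Why a heterogeneous ISA is painful: runtime dispatch assumes every core
// in the package implements the same instructions. Sketch only.
#include <stdio.h>

int main(void)
{
    // This probe reflects whichever core the thread happens to be on right now.
    // If P-cores exposed AVX-512 but E-cores did not, the probe could pass on a
    // P-core, then the scheduler could migrate the thread to an E-core, where
    // the already-selected 512-bit code path would fault with an illegal
    // instruction (#UD).
    if (__builtin_cpu_supports("avx512f"))
        printf("AVX-512F reported: a dispatcher would pick the 512-bit path\n");
    else
        printf("No AVX-512F: a dispatcher would fall back to scalar/AVX2\n");
    return 0;
}
```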
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
what? AVX512 isn’t present because it would make the ecores ISA incompatible with the pcores. It’s not a conspiracy theory, I don’t think Intel cares if you disable the ecores on a product you’ve already purchased from them.
avx-512 is not even avail for the last 2 generations, why are we still talking about this???
 
Jul 27, 2020
avx-512 is not even avail for the last 2 generations, why are we still talking about this???
Apparently some people are still confused why exactly Intel disabled AVX-512.

I guess there were WIDESPREAD reports of applications bluescreening Windows when AVX-512 threads got migrated onto E-cores yet the media failed to report that. /s

A secret: Intel was deathly scared that someone would figure out through hacking how to run BOTH E-cores and P-cores with AVX-512 enabled and then devise some sort of software trickery to keep AVX-512 threads from being bounced over to E-cores thus maintaining perfect compatibility for all applications. Intel did NOT want that because then the solution wouldn't come from inside Intel. It would make Intel look really bad. Intel had to do something to destroy any chance of such a PR nightmare happening. So they rolled out an urgent and mandatory irreversible microcode update to disable AVX-512 on all existing and future Alder/Raptor Lake CPUs. Intel now sleeps in peace, only awakened from time to time by the horrific voices of users cursing them when their Unreal Engine 5 based games crash. It's less of an issue than being made to look like fools who couldn't figure out on their own how to make AVX-512 and E-cores co-exist.
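(For what it's worth, the "software trickery" alluded to here would basically be thread affinity: pin any AVX-512-using thread to cores known to support it. Below is a rough Linux sketch of the concept only, not a claim that it ever worked on shipping Alder Lake parts; the CPU IDs are purely illustrative and which IDs map to P-cores varies per system.)

```c
// Sketch: keep the current thread on a fixed set of cores via pthread affinity.
// Assumption for illustration only: CPUs 0-7 are the AVX-512-capable cores.
// Build with something like: gcc -pthread pin_thread.c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 8; cpu++)
        CPU_SET(cpu, &set);

    // Restrict this thread so the scheduler never migrates it off these cores.
    int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (rc != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
    else
        printf("thread pinned; AVX-512 code would now stay on the chosen cores\n");
    return 0;
}
```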
 

H433x0n

Golden Member
Mar 15, 2023
Apparently some people are still confused why exactly Intel disabled AVX-512.

I guess there were WIDESPREAD reports of applications bluescreening Windows when AVX-512 threads got migrated onto E-cores yet the media failed to report that. /s

A secret: Intel was deathly scared that someone would figure out through hacking how to run BOTH E-cores and P-cores with AVX-512 enabled and then devise some sort of software trickery to keep AVX-512 threads from being bounced over to E-cores thus maintaining perfect compatibility for all applications. Intel did NOT want that because then the solution wouldn't come from inside Intel. It would make Intel look really bad. Intel had to do something to destroy any chance of such a PR nightmare happening. So they rolled out an urgent and mandatory irreversible microcode update to disable AVX-512 on all existing and future Alder/Raptor Lake CPUs. Intel now sleeps in peace, only awakened from time to time by the horrific voices of users cursing them when their Unreal Engine 5 based games crash. It's less of an issue than being made to look like fools who couldn't figure out on their own how to make AVX-512 and E-cores co-exist.
Wait, I thought the conspiracy was that Intel didn't want you to disable ecores? Which is it?
 
Jul 27, 2020
Wait, I thought the conspiracy was that Intel didn't want you to disable ecores? Which is it?
On a superficial level, they don't want the E-cores to be disabled coz it throws shade on their hard work and makes the CPU weaker in MT workloads.

At a deeper level, what I said in the previous post about them (certain engineering heads) not wanting an AVX-512+E-core co-existence solution getting worked out by the community through some hack.

I just thought of a new serious drawback for Intel in exposing AVX-512: their 16 AVX-512 threads getting pounded in benchmarks by 32 7950X threads. So they disabled the ability to make such comparisons rather than be mocked publicly by review sites.
 

MS_AT

Senior member
Jul 15, 2024
On a superficial level, they don't want the E-cores to be disabled coz it throws a shade on their hard work and makes the CPU weaker in MT workloads.

At a deeper level, what I said in the previous post about them (certain engineering heads) not wanting an AVX-512+E-core co-existence solution getting worked out by the community through some hack.

I just thought of a new serious drawback for Intel in exposing AVX-512: their 16 AVX-512 threads getting pounded in benchmarks by 32 7950X threads. So they disabled the ability to make such comparisons rather than be mocked publicly by review sites.
I think the simplest explanation works best in this case. They disabled the option so they would have less stuff to validate and debug in case of problems; right now they don't need to check whether AVX-512 works on the off-chance that somebody might want to use it, when the majority don't even know what it is. This wouldn't show up in gaming benchmarks and would not affect Cinebench etc. AMD has been winning decompression benchmarks for years now, and does anyone care or mock Intel for that?
 

GaiaHunter

Diamond Member
Jul 13, 2008
Does not matter. Gimped cache is gimped performance and games love cache.
But games do not care much about more cores.
6c/12t full cores can run all games out there without bottlenecking a 4090.
8c/16t give a decent reserve of CPU resources.
By the time games start using more than 8 cores to any significant degree, you will be better off buying whatever is state of the art then than running some 8+ core CPU from 2024/2025.