Discussion Anyone else bored out of their mind due to mainstream CPU market stagnation?


Wolverine2349

Senior member
Oct 9, 2022
525
179
86
I think only Intel is struggling. AMD is just keeping their cards close to their chest. They could've blown Intel off the x86 landscape with a much higher performing Zen 5 design but chose not to, just as they don't feel any pressure to increase desktop CPU threads above 32. Think about it. Can there really be any serious technical reason why we don't have a 9990X with Zen 5 fat CCD and Zen 5c 16-core CCD, for a total of 48 threads? Heck, OK so they don't want their TR sales getting impacted. How about they just disable some cores on the Zen5c CCD for a total of 40 threads?

Nothing anyone says is going to convince me that AMD can't do it.

Their 9950X ES is already eating 300W at unlimited power. That should be more than enough for 40 threads, and possibly even for 48, since the Zen 5c CCD isn't going to eat power like the fat CCD.

If Pat weren't such a feeble old CEO, we would've seen a 40 core K or KS CPU from Intel by now because a CEO with enough testosterone would ABSOLUTELY DEMAND it!

Well, I want a non-hybrid, homogeneous design with more than 8 cores on a single CCX/CCD/ring bus/tile/die. AMD does not have that even on Threadripper, at least not with big non-C cores.

Is there anything stopping AMD from putting more than 8 big Zen 5 cores, such as 12, on a single CCX within a single CCD? That is where I am looking for progress and what I really want to see.

I really want more than 8 cores on a single CCX within a single CCD from AMD, or, from Intel, more than 8 cores on a single ring bus or single tile. And that with a homogeneous design, not big.LITTLE with 2 different core types.

I know AMD does not want to cannibalize Threadripper sales, but even Threadripper, with its tons of cores, still maxes out at 8 cores per CCX within a CCD. They just have a bunch of CCDs with 8 cores each. Excellent productivity, server, and VM hosting monsters, but for gaming and latency-sensitive stuff, unneeded and even for the worse.

Does AMD even have the ability to create more than 8 cores on a CCX within a single CCD? If they do, they could do it without wrecking Threadripper sales by just putting one 12-core CCX/CCD in both the consumer parts and Threadripper.

It does appear AMD has more than 8 cores per CCX within a CCD with Zen 5c and Zen 6c, but those are little e-cores with gimped cache and clock speed, not big cores.

I want more big cores per CCX within a CCD. I am fine if they stay with a maximum of 16 big cores total on the desktop platform, but I really want more of them within a CCX within a single CCD.

To me it almost feels as if AMD avoids it on purpose, not because they are against 16 cores per CCX within a CCD, but because they want to sell 8-core CPUs, and there would be no reliable way to disable enough cores on 16-core CCX/CCDs to make 8-core parts while staying cost effective.

Heck, from Zen 3 through Zen 5, and probably Zen 6 as well (though Zen 6 remains to be seen), AMD has had only 8-core CCXs within a single CCD. They have either fully working ones with 8 cores, or ones with 2 disabled cores that make 6-core parts. Nothing in between, like more disabled cores on the 8-core CCX/CCDs. They even shove those same parts into Threadripper (just a bunch more of them for so many more cores), all the same stepping, probably because it's cheapest not to have more die designs. They just limit desktop to 2 CCDs while being able to stuff a whole bunch of 8-core CCDs into the bigger Threadripper CPUs.

AMD needs to stop being cheap and create more steppings like Intel did with Comet Lake and Alder Lake (8+8 and 6+0 dies with Alder, and 10-core and 8-core dies with Comet) and appears set to do with Arrow Lake and Bartlett Lake. Then AMD could still max out at 16 cores on desktop while creating a single CCD with more than 8. But they won't, because they are too cheap or have constraints from TSMC and their design?
 
Last edited:

CakeMonster

Golden Member
Nov 22, 2012
1,630
809
136
I also agree that 16 cores is fine for now, and it will probably be fine for Z6 as well, so until 2027/2028, which is when we'd expect Z7 to arrive. I kind of doubt they will change the number of cores per CCX/CCD before that. There is a chance they might keep it at 8 and make the die even smaller, adding an additional C-core CCD instead. The smaller the dies, the more efficiently they use their transistor budget and the more flexible they can be (money!). And big cores have proven to be very useful, looking at Apple and Intel, so I suspect they'll grow the big cores in size going forward rather than adding more of them.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,250
16,108
136
It does appear AMD has more than 8 cores per CCX within a CCD with Zen 5c and Zen 6c, but those are little e-cores with gimped cache and clock speed, not big cores.
Zen 4c and Zen 5c are not anywhere close to e-cores. Yes, less cache and less speed, but still FULL logic, including AVX-512.
 

GTracing

Senior member
Aug 6, 2021
478
1,114
106
Well, I want a non-hybrid, homogeneous design with more than 8 cores on a single CCX/CCD/ring bus/tile/die. AMD does not have that even on Threadripper, at least not with big non-C cores.

Is there anything stopping AMD from putting more than 8 big Zen 5 cores, such as 12, on a single CCX within a single CCD? That is where I am looking for progress and what I really want to see.

I really want more than 8 cores on a single CCX within a single CCD from AMD, or, from Intel, more than 8 cores on a single ring bus or single tile. And that with a homogeneous design, not big.LITTLE with 2 different core types.

I know AMD does not want to cannibalize Threadripper sales, but even Threadripper, with its tons of cores, still maxes out at 8 cores per CCX within a CCD. They just have a bunch of CCDs with 8 cores each. Excellent productivity, server, and VM hosting monsters, but for gaming and latency-sensitive stuff, unneeded and even for the worse.

Does AMD even have the ability to create more than 8 cores on a CCX within a single CCD? If they do, they could do it without wrecking Threadripper sales by just putting one 12-core CCX/CCD in both the consumer parts and Threadripper.

It does appear AMD has more than 8 cores per CCX within a CCD with Zen 5c and Zen 6c, but those are little e-cores with gimped cache and clock speed, not big cores.

I want more big cores per CCX within a CCD. I am fine if they stay with a maximum of 16 big cores total on the desktop platform, but I really want more of them within a CCX within a single CCD.

To me it almost feels as if AMD avoids it on purpose, not because they are against 16 cores per CCX within a CCD, but because they want to sell 8-core CPUs, and there would be no reliable way to disable enough cores on 16-core CCX/CCDs to make 8-core parts while staying cost effective.

Heck, from Zen 3 through Zen 5, and probably Zen 6 as well (though Zen 6 remains to be seen), AMD has had only 8-core CCXs within a single CCD. They have either fully working ones with 8 cores, or ones with 2 disabled cores that make 6-core parts. Nothing in between. They even shove those same parts into Threadripper, all the same stepping, probably because it's cheapest not to have more die designs. They just limit desktop to 2 of them while being able to stuff a whole bunch of 8-core CCDs into the bigger Threadripper CPUs.

AMD needs to stop being cheap and create more steppings like Intel did with Comet Lake and appears set to do with Arrow Lake and Bartlett Lake. Then they could max out at 16 cores on desktop while creating a single CCD with more than 8. But they won't, because they are too cheap or have constraints from TSMC and their design?
I'm curious: what's your use case for 12 big cores with low inter-core latency? Is there a specific program you use that has trouble with E-cores or multiple CCXs?
 

Wolverine2349

Senior member
Oct 9, 2022
525
179
86
Zen 4c and Zen 5c are not anywhere close to e-cores. Yes, less cache and less speed, but still FULL logic, including AVX-512.

Does not matter. Gimped cache means gimped performance, and games love cache. Yeah, they are much better than Gracemont E-cores.

Though Intel's Skymont has a big IPC jump, on par with Raptor Cove, but only in non-latency-sensitive apps. Same thing with AMD's Zen 4c and 5c.

Unless Zen 5c can match regular Zen 4 in clocks, raw IPC, and latency, and AMD sells a 12-16 core single-CCX/CCD Zen 5c CPU on AM5?
 

Wolverine2349

Senior member
Oct 9, 2022
525
179
86
"e-core" is a meaningless term anyway.


Kind of is. It was just a term used to describe the weaker cores in a heterogeneous hybrid design.

Though if Skymont E-cores really can match Raptor Cove IPC and can be made to have good latency, put them in a CPU by themselves and you have the equivalent of a Raptor Lake with more than 8 Raptor Cove P-cores in a homogeneous design. Then they are not E-cores anymore.
 
  • Like
Reactions: igor_kavinski
Jul 27, 2020
28,100
19,174
146
Then AMD could still max out at 16 cores on desktop while creating a single CCD with more than 8. But they won't, because they are too cheap or have constraints from TSMC and their design?
Yeah, I think they are being "cheap" in the sense that they don't want to increase the CCX size, as that would impact yield. Suppose they went with a 10-core CCX: they would either need to upgrade the Ryzen 5 SKU to 8 cores and the Ryzen 7 to 10 cores, or they would need to disable 4 cores in the CCX for a Ryzen 5 SKU, which would understandably be very wasteful if 9 or even all 10 cores are perfectly functional, and they would still need to make Ryzen 5 CPUs with a 6-core CCX to capture sales from the lower end of the market. The solution would be to have separate 8-core and 10-core CCX-based CCDs, but AMD clearly doesn't want to do that, otherwise they would've done it already.

The problem to me seems to be Intel. AMD already did their part by moving us from max 20-thread desktop CPUs in the Comet Lake era to 32-thread CPUs. Intel decided not to up the ante and fight fire with fire. With no competition to force them, AMD has no good reason to give us more cores/threads on desktop. As many others have said here before, AMD is not a charity organization. Personally, I think they kind of are, because they give so much away compared to Intel, but there's a limit to how far they can take that. They have to answer to their shareholders too. It is up to Intel to move the needle, and the rumored 40-thread (or core?) Beast Lake would be a positive step toward disrupting the desktop space. It just sucks that Intel hasn't moved it up their priority list so far.
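To put rough numbers on the yield argument: die yield is often approximated with a simple Poisson model, yield ≈ exp(-D0 × area). The sketch below uses made-up but plausible figures (the defect density and CCD area are assumptions, not real TSMC or AMD numbers) just to show the direction of the effect when you stretch an 8-core CCD to 10 cores.

```cpp
#include <cmath>
#include <cstdio>

// Poisson die-yield model: yield = exp(-D0 * A).
// D0 = defect density (defects per cm^2), A = die area (cm^2).
// All numbers below are illustrative assumptions, not real TSMC/AMD data.
double poisson_yield(double d0_per_cm2, double area_mm2) {
    return std::exp(-d0_per_cm2 * (area_mm2 / 100.0));  // mm^2 -> cm^2
}

int main() {
    const double d0 = 0.1;                            // assumed defects/cm^2
    const double ccd8_mm2 = 70.0;                     // assumed ~70 mm^2 8-core CCD
    const double ccd10_mm2 = ccd8_mm2 * 10.0 / 8.0;   // naive scaling to 10 cores

    std::printf("8-core CCD yield:  %.1f%%\n", 100.0 * poisson_yield(d0, ccd8_mm2));
    std::printf("10-core CCD yield: %.1f%%\n", 100.0 * poisson_yield(d0, ccd10_mm2));
    return 0;
}
```

Under these assumed numbers the per-die hit looks small, but it compounds with fewer candidate dies per wafer, and every harvested defective die still has to land on a sellable SKU, which is exactly the 6-core/8-core binning problem described above.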
 

Wolverine2349

Senior member
Oct 9, 2022
525
179
86
I'm curious: what's your use case for 12 big cores with low inter-core latency? Is there a specific program you use that has trouble with E-cores or multiple CCXs?


Heavily threaded games that do not like hybrid designs. It's the best set-and-forget solution for all games from the past 5-10 years and for future games as well.

No dealing with Process Lasso, APO, and the like.
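(For anyone unfamiliar: what Process Lasso automates is basically just setting CPU affinity. A minimal Windows sketch of the idea, with a hypothetical PID passed on the command line and a mask covering the first 8 logical CPUs; which logical CPUs correspond to one CCD or the P-cores depends on your actual topology.)

```cpp
// Minimal sketch (Windows): pin an already-running process to the first 8
// logical CPUs, i.e. roughly what Process Lasso automates for one CCD or the
// P-core cluster. Pass the target PID on the command line.
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    DWORD pid = static_cast<DWORD>(std::strtoul(argv[1], nullptr, 10));

    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                              FALSE, pid);
    if (!proc) {
        std::fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    // Assumed mask: logical CPUs 0-7. Adjust for your CPU's actual layout
    // (SMT siblings and CCD ordering vary by chip and firmware).
    DWORD_PTR mask = 0xFF;
    if (!SetProcessAffinityMask(proc, mask)) {
        std::fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        CloseHandle(proc);
        return 1;
    }
    std::printf("PID %lu restricted to logical CPUs 0-7\n", pid);
    CloseHandle(proc);
    return 0;
}
```

Having to babysit something like this per game, per patch, is exactly the hassle a big homogeneous CCX would avoid.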

Some may say I hate progress and just do not want change. Well, I am kind of that way. Times are a little different now than they were 20 years ago, or even 10 years ago. For such a long time we have had excellent compatibility within the x86 Windows NT SMP ecosystem: programs have worked with no scheduling quirks for the last 10 years or even longer, across all the different CPU advancements and rising core counts, as Intel always kept up to 8 cores on the same node.

20 years ago, in 2004, that was far from the case. The difference between 1994-1997 software and 2003-2004 software and games was so much bigger than between 2014 software and 2024 software. I mean, there was the jump from DOS-based Windows to the totally new Windows NT platform with 2000/XP. We have been on NT for over 20 years in an SMP ecosystem, and I hate to break compatibility when it has been so strong and good for so long, unlike the early 2000s and before, when the jump from single-tasking, DOS-based Windows with no SMP awareness to Windows NT was a necessity even with the pain. We have been on Windows NT and PCIe for so long, with awesome advancements in CPUs, GPUs, and storage; there is no need to break things and make compatibility harder this time when it's unnecessary. And big.LITTLE, at least on desktop, is unnecessary.

Dual-CCD CPUs are like dual socket back in the day, just in one package. In the 4-core gaming era, which lasted a long while, all quads from Intel Yorkfield on up had them on a single die.

And 8 cores too, from Intel pretty much always and from AMD starting with Zen 3. Most games are fine or more than fine with 8 cores, but some are slowly starting to like more than 8 and get a marginal benefit from it.
 
Last edited:
  • Like
Reactions: igor_kavinski
Jul 27, 2020
28,100
19,174
146
"e-core" is a meaningless term anyway.
For Intel, it should be w-core. Wimpy cores. Because Intel is such a wimp that they can't manage to design a small efficient core with AVX-512 functionality intact. They made the whole consumer market suffer with loss of AVX-512 for their dang wimpy cores. There SHOULD have been a backlash on their decision to do that. Sadly, Intel buyers are apparently wimps too and they think they have no say in whatever Intel decides to feed them. You do have a say, Intel buyers. Say what you want and need with your mighty dollar!
 

GTracing

Senior member
Aug 6, 2021
478
1,114
106
Heavily threaded games that do not like hybrid designs. It's the best set-and-forget solution for all games from the past 5-10 years and for future games as well.

No dealing with Process Lasso, APO, and the like.

Some may say I hate progress and just do not want change. Well, I am kind of that way. Times are a little different now than they were 20 years ago, or even 10 years ago. For such a long time we have had excellent compatibility within the x86 Windows NT SMP ecosystem: programs have worked with no scheduling quirks for the last 10 years or even longer, across all the different CPU advancements and rising core counts, as Intel always kept up to 8 cores on the same node.

20 years ago, in 2004, that was far from the case. The difference between 1994-1997 software and 2003-2004 software and games was so much bigger than between 2014 software and 2024 software. I mean, there was the jump from DOS-based Windows to the totally new Windows NT platform with 2000/XP. We have been on NT for over 20 years in an SMP ecosystem, and I hate to break compatibility when it has been so strong and good for so long, unlike the early 2000s and before, when the jump from single-tasking, DOS-based Windows with no SMP awareness to Windows NT was a necessity even with the pain. We have been on Windows NT and PCIe for so long, with awesome advancements in CPUs, GPUs, and storage; there is no need to break things and make compatibility harder this time when it's unnecessary.
Interesting, I didn't know that multiple CCXs caused that much of a performance impact in games. From what I've seen, the average performance hit is a few percent at most. Do you have any reviews or anything you could share?
 

adamge

Member
Aug 15, 2022
125
245
86
For Intel, it should be w-core. Wimpy cores. Because Intel is such a wimp that they can't manage to design a small efficient core with AVX-512 functionality intact. They made the whole consumer market suffer with loss of AVX-512 for their dang wimpy cores. There SHOULD have been a backlash on their decision to do that. Sadly, Intel buyers are apparently wimps too and they think they have no say in whatever Intel decides to feed them. You do have a say, Intel buyers. Say what you want and need with your mighty dollar!
Yes, the "whole consumer market" is suffering bigly from the loss of a feature used by 0.000000001% of machines out there.
 

adamge

Member
Aug 15, 2022
125
245
86
Interesting, I didn't know that multiple CCXs caused that much of a performance impact in games. From what I've seen, the average performance hit is a few percent at most. Do you have any reviews or anything you could share?
Are we talking about the difference between 130 fps and 136 fps here?
 

GTracing

Senior member
Aug 6, 2021
478
1,114
106
Are we talking about the difference between 130 fps and 136 fps here?
Not sure. I have heard of people using process lasso to get better performance in games, but I thought it was not worth the effort. That's why I'm curious to hear if there are games where it matters.

My go-to site for comparisons is TechPowerUp. They show the 7950X practically equal to the 7700X on average.
 

Wolverine2349

Senior member
Oct 9, 2022
525
179
86
Not sure. I have heard of people using process lasso to get better performance in games, but I thought it was not worth the effort. That's why I'm curious to hear if there are games where it matters.

My go-to site for comparisons is TechPowerUp. They show the 7950X practically equal to the 7700X on average.


TechPowerUp does not do 1% lows and 0.1% lows. The 7700X spanks the 7950X, and especially the 7900X, in 1% and 0.1% lows. Just like in early reviews, where disabling one CCD on the 7950X gave it modestly better gaming performance.
 

Wolverine2349

Senior member
Oct 9, 2022
525
179
86
Interesting, I didn't know that multiple CCXs caused that much of a performance impact in games. From what I've seen, the average performance hit is a few percent at most. Do you have any reviews or anything you could share?


They do, big time. I mean, Zen 2 had multiple CCXs within a CCD, and the 4-core part that kept everything on one CCX had so much better latency and better gaming performance across many games despite fewer cores.

Zen 3 corrected that, but it still tops out at 8 cores per CCX and has ever since.
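If you want to see the penalty directly rather than infer it from 1% lows, a crude check is a two-thread ping-pong on a shared atomic, with each thread pinned to a chosen logical CPU. A minimal Windows sketch (the CPU numbers 0 and 16 are placeholders; pick a same-CCX pair and a cross-CCD pair on your own chip and compare):

```cpp
// Minimal sketch: rough core-to-core "ping-pong" latency between two logical
// CPUs, the kind of measurement that exposes the cross-CCD penalty on
// dual-CCD parts. CPU indices below are assumptions; adjust for your topology.
#include <windows.h>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<int> turn{0};
constexpr int kIters = 200000;

void pin_current_thread(int cpu) {
    SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << cpu);
}

void responder(int cpu) {
    pin_current_thread(cpu);
    for (int i = 0; i < kIters; ++i) {
        while (turn.load(std::memory_order_acquire) != 1) { /* spin */ }
        turn.store(0, std::memory_order_release);
    }
}

int main() {
    const int cpu_a = 0, cpu_b = 16;  // assumed pair; try same-CCX vs cross-CCD
    std::thread t(responder, cpu_b);
    pin_current_thread(cpu_a);

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        turn.store(1, std::memory_order_release);
        while (turn.load(std::memory_order_acquire) != 0) { /* spin */ }
    }
    auto stop = std::chrono::steady_clock::now();
    t.join();

    double ns = std::chrono::duration<double, std::nano>(stop - start).count();
    std::printf("CPU %d <-> CPU %d: ~%.0f ns per round trip\n",
                cpu_a, cpu_b, ns / kIters);
    return 0;
}
```

On dual-CCD Ryzens the cross-CCD round trip typically comes out several times longer than the same-CCX one, which is roughly the gap Zen 2's split CCXs exposed within a CCD and Zen 3's unified 8-core CCX removed.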
 
Last edited:
Jul 27, 2020
28,100
19,174
146
Yes, the "whole consumer market" is suffering bigly from the loss of a feature used by 0.000000001% of machines out there.
Just when AMD got the feature and it was ensuring wider adoption among application (and possibly game) developers, Intel pulled it from their CPUs to stop almost every gamer from turning off E-cores (which they still do, ironically). Anyone who says the consumer market isn't suffering does not understand processing of massive amounts of data. If AVX-512 instructions were so useless, a 10% improvement in OCR (a use case with widespread usage) would not be possible for "free": https://www.phoronix.com/news/Tesseract-OCR-5.2-Released
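For a sense of what the instruction set actually buys in that kind of data-heavy code, here's a minimal, self-contained sketch (not Tesseract's actual kernel, just an illustration): a dot product that processes 16 floats per instruction and uses AVX-512's masking to handle the tail without a scalar cleanup loop. The build flag and sizes are assumptions.

```cpp
// Illustration of AVX-512 in a hot loop: 16 floats per FMA, masked tail.
// Compile with e.g. -mavx512f on a CPU that supports AVX-512F.
#include <immintrin.h>
#include <cstdio>
#include <vector>

float dot_avx512(const float* a, const float* b, size_t n) {
    __m512 acc = _mm512_setzero_ps();
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        acc = _mm512_fmadd_ps(_mm512_loadu_ps(a + i), _mm512_loadu_ps(b + i), acc);
    }
    if (i < n) {
        // Mask off the lanes past the end instead of a scalar cleanup loop.
        __mmask16 m = static_cast<__mmask16>((1u << (n - i)) - 1);
        acc = _mm512_fmadd_ps(_mm512_maskz_loadu_ps(m, a + i),
                              _mm512_maskz_loadu_ps(m, b + i), acc);
    }
    return _mm512_reduce_add_ps(acc);
}

int main() {
    std::vector<float> a(1000, 1.5f), b(1000, 2.0f);
    std::printf("dot = %.1f\n", dot_avx512(a.data(), b.data(), a.size()));
    return 0;
}
```

That masked-tail handling, plus the wider and more numerous registers, is the sort of thing that lets a library like Tesseract pick up its ~10% essentially for free once the hardware exposes the instructions.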
 
Last edited:

GTracing

Senior member
Aug 6, 2021
478
1,114
106
TechPowerUp does not do 1% lows and 0.1% lows. The 7700X spanks the 7950X, and especially the 7900X, in 1% and 0.1% lows. Just like in early reviews, where disabling one CCD on the 7950X gave it modestly better gaming performance.
They do have 1% lows on some of their reviews. Here's their 7950X3D review which includes the 7950X and 7700X.


At 1080p with a 4090, the 7950X has two percent worse 1% lows than the 7700X does.
 

Wolverine2349

Senior member
Oct 9, 2022
525
179
86
Yeah, I think they are being "cheap" in the sense that they don't want to increase the CCX size, as that would impact yield. Suppose they went with a 10-core CCX: they would either need to upgrade the Ryzen 5 SKU to 8 cores and the Ryzen 7 to 10 cores, or they would need to disable 4 cores in the CCX for a Ryzen 5 SKU, which would understandably be very wasteful if 9 or even all 10 cores are perfectly functional, and they would still need to make Ryzen 5 CPUs with a 6-core CCX to capture sales from the lower end of the market. The solution would be to have separate 8-core and 10-core CCX-based CCDs, but AMD clearly doesn't want to do that, otherwise they would've done it already.

The problem to me seems to be Intel. AMD already did their part by moving us from max 20-thread desktop CPUs in the Comet Lake era to 32-thread CPUs. Intel decided not to up the ante and fight fire with fire. With no competition to force them, AMD has no good reason to give us more cores/threads on desktop. As many others have said here before, AMD is not a charity organization. Personally, I think they kind of are, because they give so much away compared to Intel, but there's a limit to how far they can take that. They have to answer to their shareholders too. It is up to Intel to move the needle, and the rumored 40-thread (or core?) Beast Lake would be a positive step toward disrupting the desktop space. It just sucks that Intel hasn't moved it up their priority list so far.


Well, to be fair, in the desktop and laptop space Intel has been competitive with AMD since the Alder Lake and Raptor Lake releases. Since Raptor Lake, Intel has had 32 threads like AMD: 8 cores with HT plus 16 less powerful E-cores, for 24 cores total and 32 threads on the i9s.

In raw performance and IPC, Raptor Cove is a little bit ahead of Zen 4. Even Golden Cove is on par or maybe 1-2% better in IPC despite lower clocks. Raptor Cove, though, has the clocks to match or exceed Zen 4, unlike Golden Cove, which clocks lower than Zen 4 despite equal or 1-2% better IPC.

But then there is the Raptor Lake stability disaster that has become apparent in the last couple of months. That is what has really left Intel offering no competition to AMD. Who cares if the raw performance of Raptor Lake is on par with or a little better than Zen 4 when it is degrading, unstable, or who knows what, while Zen 4 runs reliably within specification.

Though when AMD was designing Zen 5, they did not know that. But AMD answers to enterprise customers first and is going to be cheap.

Like you state, they are not a charity, though I would pay more for a CPU with more than 8 cores on a CCX within a CCD. So they have that going for them, but they choose not to offer it.

But I guess the DIY market is not a big enough customer base for them to care? Or they want unsuspecting PC builders not to know about the cross-CCD latency penalty for gaming, because many just see more cores and assume that's better for gaming?

Though the DIY market is not as small and niche as some would say. If it were, why has Micro Center, which caters to the DIY crowd, expanded to Indianapolis, Miami, and Charlotte? DIY and individual CPU sales are not as niche as you think. Yes, it's not the bulk of the market, but it is far from extremely niche, or else Micro Center would not have opened 3 new stores recently.

And Intel apparently does care about us, as they are making a 12+0 die for Bartlett Lake; they would not bother if DIYers and those who hate or have issues with big.LITTLE were too unimportant. Intel is finally giving us a choice, though we have to wait a year. But it will only matter if it's stable and the new die does not inherit the stability and degradation issues of Raptor Lake's current 8+16 stepping/die. Then again, Intel can afford it since they control their own 10nm-class process and wafer production, whereas AMD uses chiplets and is constrained by TSMC. And Intel has more money, or both. Though Intel is also using TSMC for Arrow Lake, so maybe Arrow Lake unfortunately will not get a 12 P-core variant?
 
Last edited:

SarahKerrigan

Senior member
Oct 12, 2014
735
2,036
136
For Intel, it should be w-core. Wimpy cores. Because Intel is such a wimp that they can't manage to design a small efficient core with AVX-512 functionality intact. They made the whole consumer market suffer with loss of AVX-512 for their dang wimpy cores. There SHOULD have been a backlash on their decision to do that. Sadly, Intel buyers are apparently wimps too and they think they have no say in whatever Intel decides to feed them. You do have a say, Intel buyers. Say what you want and need with your mighty dollar!

Are... are you okay?
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,250
16,108
136
Does not matter. Gimped cache means gimped performance, and games love cache. Yeah, they are much better than Gracemont E-cores.

Though Intel's Skymont has a big IPC jump, on par with Raptor Cove, but only in non-latency-sensitive apps. Same thing with AMD's Zen 4c and 5c.

Unless Zen 5c can match regular Zen 4 in clocks, raw IPC, and latency, and AMD sells a 12-16 core single-CCX/CCD Zen 5c CPU on AM5?
You don't game on anything that has Zen 4c or Zen 5c cores; that's servers only. More cores for servers, and that use case does not need the cache. Otherwise they get server chips with the full Zen 5 cache.
 

Thunder 57

Diamond Member
Aug 19, 2007
4,035
6,750
136
Well, I want a non-hybrid, homogeneous design with more than 8 cores on a single CCX/CCD/ring bus/tile/die. AMD does not have that even on Threadripper, at least not with big non-C cores.

Is there anything stopping AMD from putting more than 8 big Zen 5 cores, such as 12, on a single CCX within a single CCD? That is where I am looking for progress and what I really want to see.

I really want more than 8 cores on a single CCX within a single CCD from AMD, or, from Intel, more than 8 cores on a single ring bus or single tile. And that with a homogeneous design, not big.LITTLE with 2 different core types.

I know AMD does not want to cannibalize Threadripper sales, but even Threadripper, with its tons of cores, still maxes out at 8 cores per CCX within a CCD. They just have a bunch of CCDs with 8 cores each. Excellent productivity, server, and VM hosting monsters, but for gaming and latency-sensitive stuff, unneeded and even for the worse.

Does AMD even have the ability to create more than 8 cores on a CCX within a single CCD? If they do, they could do it without wrecking Threadripper sales by just putting one 12-core CCX/CCD in both the consumer parts and Threadripper.

It does appear AMD has more than 8 cores per CCX within a CCD with Zen 5c and Zen 6c, but those are little e-cores with gimped cache and clock speed, not big cores.

I want more big cores per CCX within a CCD. I am fine if they stay with a maximum of 16 big cores total on the desktop platform, but I really want more of them within a CCX within a single CCD.

To me it almost feels as if AMD avoids it on purpose, not because they are against 16 cores per CCX within a CCD, but because they want to sell 8-core CPUs, and there would be no reliable way to disable enough cores on 16-core CCX/CCDs to make 8-core parts while staying cost effective.

Heck, from Zen 3 through Zen 5, and probably Zen 6 as well (though Zen 6 remains to be seen), AMD has had only 8-core CCXs within a single CCD. They have either fully working ones with 8 cores, or ones with 2 disabled cores that make 6-core parts. Nothing in between, like more disabled cores on the 8-core CCX/CCDs. They even shove those same parts into Threadripper (just a bunch more of them for so many more cores), all the same stepping, probably because it's cheapest not to have more die designs. They just limit desktop to 2 CCDs while being able to stuff a whole bunch of 8-core CCDs into the bigger Threadripper CPUs.

AMD needs to stop being cheap and create more steppings like Intel did with Comet Lake and Alder Lake (8+8 and 6+0 dies with Alder, and 10-core and 8-core dies with Comet) and appears set to do with Arrow Lake and Bartlett Lake. Then AMD could still max out at 16 cores on desktop while creating a single CCD with more than 8. But they won't, because they are too cheap or have constraints from TSMC and their design?

Yea, we get it. You've said it before, and I don't even know how many times in this one post. This is getting to "3600 x2" levels of silliness. There is no reason for AMD to go to more than 8 per CCX. The majority of chips they sell are 8 cores or fewer. They would either be disabling cores and wasting resources, or they would have to make both, say, an 8-core and a 12-core version, again wasting resources. Not to mention unifying the L3 cache would almost certainly increase latency, just like it went up from Zen 2 to Zen 3. And to what end? The 7800X3D sells like hotcakes and does its job well.

Just because you have a "dream product" in mind doesn't mean it makes sense to make it. Hence the reference to the "3600 x2". People have been asking for a giant APU forever. AMD is only now getting around to it with Strix Halo.

Dual-CCD CPUs are like dual socket back in the day, just in one package. In the 4-core gaming era, which lasted a long while, all quads from Intel Yorkfield on up had them on a single die.

Not even close.
 

cebri1

Senior member
Jun 13, 2019
373
405
136
As a general thought, I think stagnation is due to most consumer needs being fulfilled by even the weakest of CPUs.

I remember when I was a kid back in the 90s: every 12 months a processor would come out that was 50%-100% faster than the previous gen, and software was evolving so quickly that PCs became obsolete rapidly. This is something the smartphone market experienced in the late 00s and early 10s, especially outside the Apple ecosystem.

If you add to that the skyrocketing cost of getting more transistors onto a tiny package, it makes total sense that CPU manufacturers, apart from fighting for the irrelevant CB crown and who gets 934 fps in Rainbow Six, are focusing more on delivering power-efficient, reliable chips for the corporate crowd and the average computer user.

Yes, there are still a lot of people who need that increase in performance for HPC workloads, but more and more we are seeing these workloads offloaded to GPUs or dedicated hardware.

my 2c.
 

Wolverine2349

Senior member
Oct 9, 2022
525
179
86
Yea, we get it. You've said it before and I don't even know how many times in this one post. This is getting to "3600 x2" levels of silliness. There is no reason for AMD to go to more than 8 per CCX. The majority of chips they sell are 8 cores or less. They would either be disabling cores and wasting resources or they would have to make both say an 8 core and 12 core version, again wasting resources. Not to mention unifiyng the L3 cache would almost certainly increase latency, just like it went up from Zen 2 to Zen 3. And to what end? The 7800X3D sells like hotcakes and does its job well.

Just because you have a "dream product" in mind doesn't mean it makes sense to make it. Hence the reference to the "3600 x2". People have been asking for a giant APU forever. AMD is only now getting around to it with Strix Halo.



Not even close.


Well, Intel appears to be listening to those who prefer homogeneous designs, as they have a 12+0 Bartlett Lake die with all 12 cores on a single ring bus coming out in a year.

Does it make sense for Intel to make it, if it does not make sense for AMD to make more than 8 cores on a single die?
 

Thunder 57

Diamond Member
Aug 19, 2007
4,035
6,750
136
Well, Intel appears to be listening to those who prefer homogeneous designs, as they have a 12+0 Bartlett Lake die with all 12 cores on a single ring bus coming out in a year.

Does it make sense for Intel to make it, if it does not make sense for AMD to make more than 8 cores on a single die?

That's still very much a rumor. I see no benefit in Intel making a new die of a soon-to-be-outdated, problematic core. We also don't know what the clocks might be before they run into a power limit. I expect ARL and Zen 5 to perform better in just about every scenario. Consoles are limited to 8 Zen 2 cores, so I don't know what you think you are missing out on.