> Genuine question anyway: what's with the obsession with 8+32? Currently AMD are competitive with 16+0 against 8+16, i.e. 32 threads against 32 threads. I don't see why they wouldn't also be competitive with, say, 8+16 vs 8+32 on Intel's side (or 48 threads on both) in the future?

I think that obsession exists only in certain members' heads.

> I think that obsession exists only in certain members' heads.
I don't get it; if you want more nT performance, we have better options now. What's so important about having them in a client package where they will be memory starved?
> Hobbyists who want to play with a lot of MT performance, but who don't actually make any money from it so don't want to spend a lot. I.e. a super tiny niche of the market.

People don't know what they want until they have it. MT performance is relevant every day, but because 99% of people don't spend money on premium products they think it's a niche case... until years later, when the high-end tech finally trickles down to the mid range and low end.
> On a serious note though, in a lot of workloads Zen 5 is going to be memory bandwidth bound. Zen 4 already is in some of the most rigorous workloads, and even in some games you can see benefits from more memory bandwidth (see: launch-day Starfield).

The memory bandwidth limitation will certainly be serious for many tasks. However, Turin Dense will have essentially exactly the same memory bandwidth per core (I'm expecting it to support DDR5-6000 (PC5-48000); otherwise it's quite a bit less), so there must be workloads that benefit from it.
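For what it's worth, here is the back-of-the-envelope arithmetic behind the "same bandwidth per core" claim. The channel and core counts are my assumptions, not from the post: a hypothetical 32-core client part on dual-channel (2x64-bit) AM5 DDR5-6000, versus a 192-core Turin Dense on a 12-channel SP5 platform, also at DDR5-6000.

```python
# Peak DRAM bandwidth per core, back-of-the-envelope.
# Assumptions (mine, not the poster's): 64-bit channels moving 8 bytes per
# transfer; 32-core client chip on 2 channels; 192-core Turin Dense on 12.

def peak_gbps(mt_per_s: int, channels: int) -> float:
    """Peak bandwidth in GB/s: transfers/s * 8 bytes per 64-bit channel."""
    return mt_per_s * 8 * channels / 1e3  # MT/s -> GB/s

client_per_core = peak_gbps(6000, 2) / 32        # 96 GB/s over 32 cores
turin_dense_per_core = peak_gbps(6000, 12) / 192  # 576 GB/s over 192 cores

print(client_per_core, turin_dense_per_core)  # 3.0 3.0
```

Under those assumptions both land on 3.0 GB/s per core, which is the point: a big-core-count client chip would be no more starved per core than the dense server part.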
They'll have to rev it to LPCAMM3 to support LPDDR6. It has different channel width, additional signals, etc. so it can't use the same module.
SMT4 baybeeeee
So you have no source, meaning your claim is moot.
> Some say 2025/2026. And Zen 6 and Arrow Lake Refresh is when?

The current JEDEC target date for the DDR6 spec is "mid-2025". It takes time from spec release to mass production that can support a mass-market launch: about 16 months from the DDR5 spec to Alder Lake, but that was abnormally short because the spec release was repeatedly delayed and chips were ready very quickly after it. A more normal transition is the DDR4 one, where it took two years from spec release to Haswell-E, and more than that for high-volume products to ship.
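The gaps mentioned above can be checked with quick date arithmetic, taking the commonly cited dates (DDR5 spec JESD79-5 published July 2020, Alder Lake launched November 2021; DDR4 spec JESD79-4 published September 2012, Haswell-E launched August 2014):

```python
from datetime import date

def months_between(a: date, b: date) -> int:
    # Whole-month difference between two dates.
    return (b.year - a.year) * 12 + (b.month - a.month)

# DDR5: spec July 2020 -> Alder Lake November 2021
ddr5_gap = months_between(date(2020, 7, 1), date(2021, 11, 1))
# DDR4: spec September 2012 -> Haswell-E August 2014
ddr4_gap = months_between(date(2012, 9, 1), date(2014, 8, 1))

print(ddr5_gap, ddr4_gap)  # 16 23
```

Projecting a similar 16-24 month gap forward from a mid-2025 DDR6 spec would put DDR6 platforms somewhere between late 2026 and mid 2027.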
That's not how cache coherency works on AMD parts.
On L3 miss you probe the c/sIOD directory and bang!
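Very roughly, that probe flow looks like the toy sketch below. The structure and names (HomeDirectory, CCX ids, the returned strings) are made up for illustration; this is not AMD's actual protocol, just the general shape of a directory lookup on an L3 miss.

```python
# Toy directory-based coherence lookup: on a local L3 miss, the request goes
# to a home directory (sitting on the IOD in this sketch), which either
# forwards a probe to the CCX that owns the line or falls through to DRAM.
# All names are illustrative, not AMD's real implementation.

class HomeDirectory:
    def __init__(self):
        self.owner = {}  # cache-line address -> id of the CCX holding it

    def lookup(self, addr: int, requester: int) -> str:
        owner = self.owner.get(addr)
        if owner is None:
            # No CCX has the line cached: fetch it from memory.
            self.owner[addr] = requester
            return "fetch from DRAM"
        if owner == requester:
            return "already owned"
        # Another CCX holds it: probe it and move the line cross-CCX.
        self.owner[addr] = requester
        return f"probe CCX{owner}, cache-to-cache transfer"

d = HomeDirectory()
print(d.lookup(0x1000, 0))  # fetch from DRAM
print(d.lookup(0x1000, 1))  # probe CCX0, cache-to-cache transfer
```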
> People don't know what they want until they have it, MT performance is relevant every day but because 99% of people don't spend money on premium products they think it's a niche case... until years later when the high end tech finally trickles down to mid range and low end.

I agree that high core count computing is highly relevant to our daily lives (in the postindustrial parts of the world). But I qualify that these computing applications generally involve the processing of large amounts of data, and tend to happen remotely from the end user. In contrast, high core count computing on small data exists, but it is irrelevant in most people's lives. And yes, large-dataset processing is trickling down to client computing all the time, but not by means of, say, a CPU with 32 very wide cores sitting on an LGA 1718 socket (edit: let alone a 1140-contact BGA, for that matter).
> Like I said, it would take some advanced algorithm

That really, really is not how cache coherency works!
> I bet AMD has been working on something

I can safely say that they don't.
On topic, there has been at least one poster in the Zen 5 speculation thread claiming that Zen 6 sticks with ≤16 cores (and ≤8 cores per CCX) in the client segment.
> Hopefully we get 12-16 core CCXs in single CCD with Zen 6.

Not in client.
> That really, really is not how cache coherency works!
> I can safely say that they don't.
Abandon all hope.
> This would also help CCXs inside mobile chips

The whole idea of CCX is that they don't interact.
> And even if 16 core CCDs never go to mainstream desktop, they may still sadly be dual 8-core-CCX CCDs on Threadripper and EPYC anyway, so there's not even an option to pay up and get more than 8 big cores per CCX?

Three thoughts:
- Dual-CCX chiplets made sense for a) Zen 1…2 with their small 4-core CCXs, and b) for Bergamo with its smallish 8-dense-core halved-L3$ CCXs.
Vice versa, there appears little reason to put two 8-large-core full-fat-L3$ CCXs onto a single chiplet. Maybe I am missing something though.
- Maybe they'll make a CCD with a single 16-large-core full-fat-L3$ CCX for database servers?
- 8-core CCXs actually seem fine to me for many workstation and HPC server uses (also a variety of other less compute oriented server uses) if these limited core count CCXs mean that the caches are fast and energy efficient. The latter has been true ever since Zen 1, if I am not mistaken.
> The whole idea of CCX is that they don't interact.
If you want large shared LLC, MALL is your friend.
> but not so great between 2 CCDs or CCXs.

It's a shared LLC. Gets the job done.
> L3 on CCD or on V-Cache is just much faster than MALL

That's why they're private!
> just of the L3s being a victim cache of each other

Needlessly complicated for no gains?
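For reference, the scheme being dismissed here would look roughly like the sketch below: a line evicted from one CCD's L3 spills into the sibling CCD's L3, and a local miss probes the sibling before going to DRAM. Purely illustrative; the class names and policies are invented.

```python
from collections import OrderedDict

# Toy model of "the L3s as mutual victim caches": evictions from one CCD's
# L3 spill into the other CCD's L3, and a local miss probes the sibling
# before falling through to DRAM. Illustrative only.

class L3Cache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()  # addr -> None, kept in LRU order
        self.sibling = None

    def insert(self, addr, spill=True):
        self.lines[addr] = None
        self.lines.move_to_end(addr)
        if len(self.lines) > self.capacity:
            victim, _ = self.lines.popitem(last=False)  # evict LRU line
            if spill and self.sibling is not None:
                # spill=False on the sibling so lines don't ping-pong forever
                self.sibling.insert(victim, spill=False)

    def access(self, addr) -> str:
        if addr in self.lines:
            self.lines.move_to_end(addr)
            return "local hit"
        if self.sibling and addr in self.sibling.lines:
            # Found in the other CCD's L3: pull it back over the fabric.
            del self.sibling.lines[addr]
            self.insert(addr)
            return "sibling hit (cross-CCD latency)"
        self.insert(addr)
        return "miss to DRAM"

a, b = L3Cache(2), L3Cache(2)
a.sibling, b.sibling = b, a
a.access(1); a.access(2); a.access(3)  # line 1 is evicted and spills into b
print(a.access(1))  # sibling hit (cross-CCD latency)
```

Every sibling hit still pays a cross-CCD fabric round trip, which is the "no gains" part of the objection.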
> But the beauty is that the SRAM is already there, on the 2nd CCD

Yes. FOR the 2nd CCD.
> They're private caches.

He wants Cache Communism.
Like, you can't share them; snoop, probe, and tag check all happen on the IOD.
> X3D for gamers

Gotta double dip.
> The APU with a tiny IGP but a giant NPU cuz AI AI AI for OEMs, at least until AI hype ends.

Desktop HPT exists, you know.
> Small core product at Samsung/whatever is the cheapest viable node for the cheapskates

Sonoma Valley is next year, yes.
> So when does Zen 6 launch?

Somewhere H1'26.
> Somewhere H1'26.

So about 22 months again. And outside of the AM5 "support through 2025" statement. Yeah...
> And outside of the AM5 "support through 2025" statement.

They extended it to 2027+ in this very keynote.
> Somewhere H1'26.

Zen6/RDNA5 hopium activated.