
Zen 6 Speculation Thread

They will likely increase clocks; basically every designer moving to N3 is getting a clock bump. Getting IPC uplifts is harder, so higher frequencies seem like the path of least resistance until the N3 performance wall is hit.

But then who knows with AMD bean counters. They might optimize for density and make the chips smaller.
They are a software company now, except that they are not particularly great at SW either.
 
Doubt there will be an increased clock speed. What's more likely to help Zen 6 is an IOD and GMI redesign.
For me the SoC design is what's more interesting; we have been discussing the GMI replacement for a long while now.
Also, the supposed unification of DT and laptop as client would be nice, with product lines differentiated during packaging.
 
So how will Zen 6's ST uplift fare?

Smaller IPC gain than Zen4 -> Zen5, but coupled with a clock speed boost thanks to N2...

The huge question is how the memory subsystem works (L3 + memory controller), and beyond a vague "it's totally different", we know very little about it. If average latencies drop substantially, performance can improve a lot in games and other software that really benefits from it. But we really know nothing yet.
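A rough way to see why average latency matters so much is the classic average memory access time (AMAT) formula. Here is a minimal sketch; every hit rate and cycle count below is a made-up illustrative number, not a measurement of any real Zen part:

```python
# Illustrative AMAT (average memory access time) sketch.
# All latencies and hit rates are hypothetical example numbers,
# not figures for any actual AMD product.

def amat(l1_cyc, l1_hit, l2_cyc, l2_hit, l3_cyc, l3_hit, dram_cyc):
    """Average cycles per access through a three-level cache hierarchy."""
    return (l1_cyc
            + (1 - l1_hit) * (l2_cyc
            + (1 - l2_hit) * (l3_cyc
            + (1 - l3_hit) * dram_cyc)))

# Baseline: L3 at 50 cycles, DRAM at 400 cycles (assumed).
base = amat(4, 0.95, 14, 0.80, 50, 0.60, 400)
# Hypothetical redesign: L3 at 40 cycles, DRAM at 320 cycles.
improved = amat(4, 0.95, 14, 0.80, 40, 0.60, 320)

print(f"baseline AMAT: {base:.2f} cycles")
print(f"improved AMAT: {improved:.2f} cycles")
print(f"reduction:     {100 * (1 - improved / base):.1f}%")
```

Even a modest cut to L3 and DRAM latency moves the average noticeably, which is why latency-sensitive workloads like games respond so strongly to memory subsystem changes.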
 
If Medusa Halo is supposed to be the successor to Strix Halo, and Zen 5 "Strix Halo" CCDs can easily be swapped for Zen 6 "Medusa Ridge" CCDs while keeping the same Strix Halo IOD, we might get Medusa Halo relatively quickly.
It seems that future Ridge processors feature the same fan-out links as Strix Halo.
 
The rumor that was passed around on this forum was that Zen6 was more about the package stacking/connection tech upgrade and that Strix Halo was effectively a test product for part of that. Reading about the CCDs essentially being swappable maps to that well.
 
What will the Kraken Point counterpart of Zen6 be called (if there is even one)?
With the Clash of the Titans theme of Kraken and Medusa maybe it's called Calibos or something similar 😅

Or it could just be Greek monster themed in general.

Like Typhon, Scylla, Charybdis or Cerberus to name just a handful.
 
Hydra is obviously another big Greek monster name.

I think Orochi, the eight-headed monster from Japanese Shinto myth, was the codename for the first Bulldozer DT/server die.
 
Perhaps Strix Halo being slotted between Zen 5 and 6 is not so much about testing, but more about time to market.
 
One thing I would like to emphasize is the amount of die area that can be saved on the CCD by changing the interconnect technology. The following picture is from the MI300 deep dive:
[Image: AMD Instinct MI300 family architecture, chiplet reuse]

In the image on the right, you can see the SoIC interfaces represented as two tiny blue boxes.
The GMI links currently used across all of their CPUs are the two big orange blocks to the left and right of the left die shot.
Now, although SoIC is much denser than, for example, InFO-RDL, even the latter would save them a lot of die area on I/O. That area could either be used for cost benefits or (hopefully) be put to good use for performance. And should Zen 6 be produced on N3 as expected, those savings are even more significant than on N4.
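To put rough numbers on that argument, here is a back-of-envelope sketch. Every figure below is a hypothetical placeholder chosen for illustration only; none are measured AMD values:

```python
# Back-of-envelope sketch of CCD die area spent on die-to-die links.
# All areas and the scaling factor are hypothetical placeholders.

ccd_area_mm2 = 70.0    # assumed total CCD area
gmi_phy_mm2 = 2 * 2.5  # assumed: two GMI SerDes PHY blocks
soic_if_mm2 = 2 * 0.3  # assumed: two much smaller SoIC-style interfaces

saved_mm2 = gmi_phy_mm2 - soic_if_mm2
print(f"area saved:          {saved_mm2:.1f} mm^2")
print(f"as share of the CCD: {100 * saved_mm2 / ccd_area_mm2:.1f}%")

# On a denser node the logic shrinks, but an analog-heavy SerDes PHY
# barely does, so the *relative* saving grows on N3:
shrink = 0.7  # assumed logic scaling factor, N4 -> N3
print(f"share on shrunk die: {100 * saved_mm2 / (ccd_area_mm2 * shrink):.1f}%")
```

The exact numbers are invented, but the shape of the argument holds: the fixed-ish analog PHY area becomes a bigger slice of a smaller die, so replacing it pays off more on a denser node.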

On that note, I am also very interested in comparisons of Zen 5 and ARL regarding how much die space gets spent just on interconnects. I would expect ARL to fare better in that regard.
 
Well, yeah but no.

Unfortunately, the more complicated a data signalling method is (in order to extract greater bit/watt efficiency), the more complicated the circuitry is likely to become, and therefore the more area it takes up on die.

Best case scenario, they can somehow exploit vertical stacking in the future to punt the major I/O off the main core layer onto its own layer, much as there is a possibility of taking the L2 and L3 completely off the main layer of a stack as vertical interconnect density increases.

The other possibility is photonic I/O, but it remains out of reach at this level of transistor geometry: the wavelengths of light far exceed the scale of cutting-edge transistors, making it extremely difficult to shrink optical components.
 
So you are basically saying that the area saved by much denser bumps is entirely eaten up by more complicated control logic?
Admittedly, I have a hard time seeing proof for this argument in existing products that use advanced packaging technologies.
Maybe you have a good example to support it?
 
Sure thing.

Just look up 10GBASE-T Ethernet.

It's been around for something like 18 years and it still stinks, because the complexity of the signalling makes it so expensive per port in power.
How does that compare to the case discussed here, apart from also generally having something to do with communication?
 
One would assume that increasing the number of interconnect microbumps (or whatever the evolution is called) allows them to widen the I/O and hopefully make it less complex.

(As opposed to a super-complex scheme over a smaller number of connections, just like 10GBASE-T, which has only a handful of pins.)

So being able to pack in more microbumps per mm² is a big advantage.

The more per mm² they can get, the closer inter-chiplet I/O gets to being little different from monolithic intra-die metal-layer signalling paths, with all the bandwidth and power benefits that implies.
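The wide-and-slow versus narrow-and-fast trade-off can be sketched numerically: aggregate bandwidth is roughly signal lanes × per-lane rate, and dense microbumps let you get lanes cheaply with simple parallel signalling. All densities and rates below are illustrative assumptions, not vendor specs:

```python
# Sketch: aggregate die-to-die bandwidth = lanes * per-lane rate,
# where lanes = bump density * link area * fraction of bumps carrying data.
# Every number here is an illustrative assumption, not a real spec.

def link_bandwidth_gbps(bumps_per_mm2, area_mm2, gbps_per_lane,
                        signal_fraction=0.5):
    """Aggregate bandwidth, assuming only signal_fraction of bumps
    carry data (the rest are power/ground)."""
    lanes = bumps_per_mm2 * area_mm2 * signal_fraction
    return lanes * gbps_per_lane

# Narrow-and-fast (organic substrate, GMI-like): few bumps, fast SerDes.
narrow = link_bandwidth_gbps(bumps_per_mm2=25, area_mm2=4, gbps_per_lane=32)

# Wide-and-slow (fan-out / SoIC-like): dense microbumps, simple parallel I/O.
wide = link_bandwidth_gbps(bumps_per_mm2=1000, area_mm2=4, gbps_per_lane=2)

print(f"narrow-and-fast: {narrow:.0f} Gb/s")
print(f"wide-and-slow:   {wide:.0f} Gb/s")
```

With these made-up figures, the dense link wins on raw bandwidth despite running each lane sixteen times slower, and simple low-speed lanes also tend to cost far less power and PHY area per bit than high-speed SerDes.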
 