
Zen 6 Speculation Thread

Page 303
Also Doubled L3 per core for the server dense variants which client is not going to get.
That's just dense variants catching up.
And if L3 was that important in server, there'd have been more demand for the -X variants.
Speaking of which, do we think we will get PCIE 6.0 and faster memory support on client? I am thinking yes.
PCIe 6.0 is pointless in desktop for the foreseeable future.
Would be a cost driver for no real benefit.

If there's enough VRAM, faster PCIe is useless.
If there's not enough VRAM, PCIe 6.0 would still be too slow to avoid hiccups.

Right now, most games fit into 16GB even at 4K max settings.
By the time that's no longer the case, cards for 4K gaming will have more than 16GB VRAM, while cards with 16GB or less will be too slow for 4K/max settings, regardless of PCIe.

The bw delta between PCIe and VRAM is just too big.
The only thing PCIe needs to be is fast enough for CPU-to-GPU communication, and right now even PCIe 3.0 x16 is still sufficient for that. PCIe 4.0 x16 / PCIe 5.0 x8 will remain good enough for even longer, and PCIe 5.0 x16 will remain sufficient for ten years or more.
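Back-of-envelope version of the bandwidth-delta argument. These are approximate spec-sheet numbers (per-direction x16 link bandwidth per PCIe generation, and an RTX 4090's ~1 TB/s GDDR6X as an example VRAM figure), not measurements:

```python
# Rough comparison: PCIe x16 link bandwidth vs. typical high-end GPU VRAM
# bandwidth. All figures approximate, taken from public spec sheets.

pcie_x16 = {          # per-direction bandwidth of an x16 link, GB/s
    "PCIe 3.0": 16,
    "PCIe 4.0": 32,
    "PCIe 5.0": 64,
    "PCIe 6.0": 128,
}

vram_gbps = 1008      # e.g. RTX 4090 GDDR6X, ~1 TB/s

for gen, bw in pcie_x16.items():
    print(f"{gen} x16: {bw:4d} GB/s -> VRAM is ~{vram_gbps / bw:.0f}x faster")
```

Even at PCIe 6.0 x16 the link is still roughly 8x slower than local VRAM, which is the point: once you spill out of VRAM, no realistic PCIe generation saves you.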
 
and the poor VRMs are what is slowing it down at that. If my motherboard had better VRM heatsinks, it might push past the 500 W limit and hit higher clocks.
My feeling is that the PPT increase from Genoa to Turin, i.e. 400 W -> 500 W for top SKUs, was only halfheartedly implemented by the SP5 board makers. (Most SP5 boards were not revised for that at all.) At least in this regard Venice should have a benefit from getting new sockets (SP7, SP8) for which the board makers and system builders have the electrical and thermal requirements laid out to them right from the start. (And then the same game repeats with Zen 7...)
 
That's nearly 3x in two years. Now granted, that's going from Samsung 8nm (more akin to TSMC 12nm than 7nm) to 5nm, so almost two node shrinks, but then again the transistors go up nearly 3x, not 2x.
Ahem. Two node shrinks should equal 4x, not 3x.
Guys are looking desperate for Zen 6 to suck. It’s funny
I don't see any indication of that otherwise. So you think saying it could be 1.7x instead of 2x counts as "sucking"? Not saying I agree with @Abwx, but you seem to be living in a different world.
 
Aren't you mixing up performance increases with transistor counts?

The transistor counts still go up massively, for instance:

1. NVIDIA GA104 chip (RTX 3070 Ti among others) is 17.4 billion transistors at 392 mm²
2. NVIDIA AD103 chip (RTX 4070 Ti among others) is 45.9 billion transistors at 379 mm²

That's nearly 3x in two years. Now granted, that's going from Samsung 8nm (more akin to TSMC 12nm than 7nm) to 5nm, so almost two node shrinks, but then again the transistors go up nearly 3x, not 2x.
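The "nearly 3x" can be checked directly from the two numbers above, both as raw transistor count and as density (transistors per mm²), since the dies are slightly different sizes:

```python
# Quick check of the "nearly 3x" claim, using the figures quoted in the post.
ga104 = {"transistors_b": 17.4, "area_mm2": 392}  # Samsung 8nm
ad103 = {"transistors_b": 45.9, "area_mm2": 379}  # TSMC 5nm-class ("4N")

count_ratio = ad103["transistors_b"] / ga104["transistors_b"]
density_ratio = (ad103["transistors_b"] / ad103["area_mm2"]) / (
    ga104["transistors_b"] / ga104["area_mm2"]
)
print(f"transistor count: {count_ratio:.2f}x")
print(f"density:          {density_ratio:.2f}x")
```

That works out to roughly 2.6x more transistors and roughly 2.7x higher density — close to 3x, but below the ~4x a textbook pair of full node shrinks would suggest.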

The same is true regarding CPUs. Intel's transistor counts per core (even without caches) have gone up multiple times since Skylake. If I'm not mistaken, it has pretty much been 2x the transistors for 20% more perf/clock.

And the biggest difference compared to yesteryear is that you can't fire up all those "newly acquired" transistors and expect them to draw as little power as the previous ones at a given chip size (which was always the case until the mid-2000s). Power per transistor is the part that only goes down by tens of percent at sweet-spot clocks. So you have to be clever in other ways (use more transistors to reach higher clocks, bloat buffer sizes, add more cores / features that aren't all used at the same time, ...).

In the end you might get a bigger transistor budget (as long as it's not analog or SRAM, where gains are minimal) but they are a hell of a lot more expensive and can't be fired up all at once (unless you increase power limits).
In DC, the performance is directly tied to how many cores can clock to the highest level within a power envelope.

As we move into the "silly amount of cores in desktop/laptop" era, this will ALSO be the case.

If the above is true, then maximum performance at max clock is limited by power, not clock speed.

The IPC increases are mostly due to transistor budget increases (more buffers, wider execution, more execution units, more complex branch predictors, etc). Two things are changing here (both bad for us). The first is that the transistor budget increase per generation is going DOWN. The second is that increases in transistor budget are reaching the point of diminishing returns as well.

Soooo, back to my contention. You can do only so much with single core IPC and clock speed to get better performance with minimal transistor increases, but if you can improve efficiency and make a socket with higher power capabilities, you can make MT performance increases much more easily.
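The power-envelope argument can be made concrete with a toy model. This is purely illustrative and assumes per-core power scales roughly with f³ (voltage tracking frequency) — a common rule of thumb, not a claim about any real CPU:

```python
# Toy model: aggregate throughput under a fixed power envelope, assuming
# per-core power ~ k * f^3 (voltage scaling with frequency). Illustrative only.

def throughput(cores: int, power_budget: float, k: float = 1.0) -> float:
    """Throughput (cores * clock) with the clock set as high as the power
    budget allows: f = (P / (cores * k)) ** (1/3)."""
    f = (power_budget / (cores * k)) ** (1 / 3)
    return cores * f

base = throughput(cores=8, power_budget=100)
doubled = throughput(cores=16, power_budget=100)
print(f"2x cores, same power: {doubled / base:.2f}x throughput")
```

Under this assumption, doubling cores at the same power cap forces each core to ~0.79x clock but still nets ~1.59x (2^(2/3)) total throughput — which is why efficiency and a higher-power socket buy MT performance more cheaply than ST gains.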
 
In DC, the performance is directly tied to how many cores can clock to the highest level within a power envelope.

As we move into the "silly amount of cores in desktop/laptop" era, this will ALSO be the case.

If the above is true, then maximum performance at max clock is limited by power, not clock speed.

The IPC increases are mostly due to transistor budget increases (more buffers, wider execution, more execution units, more complex branch predictors, etc). Two things are changing here (both bad for us). The first is that the transistor budget increase per generation is going DOWN. The second is that increases in transistor budget are reaching the point of diminishing returns as well.

Soooo, back to my contention. You can do only so much with single core IPC and clock speed to get better performance with minimal transistor increases, but if you can improve efficiency and make a socket with higher power capabilities, you can make MT performance increases much more easily.
No one cares about MT capabilities though beyond a certain point. We're not entering the silly amount of cores era. Next gen most parts sold in client will be 8/10/12 core parts. There will be some higher core count options but they are a small part of the market and will remain so. You know this.
 
In DC, the performance is directly tied to how many cores can clock to the highest level within a power envelope.

As we move into the "silly amount of cores in desktop/laptop" era, this will ALSO be the case.

If the above is true, then maximum performance at max clock is limited by power, not clock speed.

The IPC increases are mostly due to transistor budget increases (more buffers, wider execution, more execution units, more complex branch predictors, etc). Two things are changing here (both bad for us). The first is that the transistor budget increase per generation is going DOWN. The second is that increases in transistor budget are reaching the point of diminishing returns as well.

Soooo, back to my contention. You can do only so much with single core IPC and clock speed to get better performance with minimal transistor increases, but if you can improve efficiency and make a socket with higher power capabilities, you can make MT performance increases much more easily.
This seems very logical to me. When you add in the fact that we are cramming more transistors into smaller areas while increasing core counts, effective cooling is going to be paramount to extracting all of the performance these new CPUs can deliver.

Not that it is necessary, but I needed a little winter project and went full-on custom loop Saturday night: ordered two 360 mm radiators, a Core 1 waterblock, a D5 pump, etc... Should be spilling coolant into my PC later in the week...
 
Ahem. Two node shrinks should equal 4x, not 3x.

I don't see any indication of that otherwise. So you think saying it could be 1.7x instead of 2x counts as "sucking"? Not saying I agree with @Abwx, but you seem to be living in a different world.
I wasn’t being specific about him. Don’t know why it hit a nerve with you, tho. Lmao
 
MDS1 lo is a direct KRK1 replacement and targets below 1k spp

I wonder how much optimization goes into the SoC design, since there are potentially billions of dollars riding on this one SoC.

AMD had all the time in the world and all the resources in the world to make it a 1st class entry into this market segment...
 
NO, MDS1 should be STX replacement.
-hi, yes. -lo, no.
I wonder how much optimization goes into the SoC design, since there are potentially billions of dollars riding on this one SoC
It's a boring nothingburger part that's pretty much a 1:1 replica of KRK with updated IPs and an LP cluster.
Good for the target market, absolutely zero leadership chops otherwise.
AMD had all the time in the world and all the resources in the world to make it a 1st class entry into this market segment...
Is this server? No? Too bad!
 
It's a boring nothingburger part that's pretty much a 1:1 replica of KRK with updated IPs and an LP cluster.
Good for the target market, absolutely zero leadership chops otherwise.

That's what I am afraid of, this sort of attitude about it.

Intel's life depends on their PTL notebook chip to compete for the client market, while AMD seems to be treating this market like it is not a $10-billion-a-quarter ($40 billion annually) market.

Is this server? No? Too bad!

Well, there was the "> 40% client share" on AMD slides. Hopefully, AMD acts like they mean it and give this a first-class effort.
 
while AMD seems to be treating this market like it is not a $10-billion-a-quarter ($40 billion annually) market
Because DC TAM they're aiming at is like infinite billions.
Well, there was the "> 40% client share" on AMD slides.
That one they'll get on autopilot.
Hopefully, AMD acts like they mean it and give this a first-class effort.
Oh no my boy that just ain't happening.
But it's gonna be good enough™.
 
I wasn’t being specific about him. Don’t know why it hit a nerve with you, tho. Lmao
Because no one really said Venice is going to suck. And the 10-15% gain figure comes directly from an AMD slide.

Also, Zen 5 disappointed in performance for many, even those that had more realistic expectations like the 15-20% per gen crowd. So they have learned to be conservative, because not meeting targets is very common in silicon.
That one they'll get on autopilot.
Not if they don't have better products than Intel they won't.
 
Not Zen 6 specific, but Microsoft is launching some kind of AI dealio called "Ignite" tomorrow. I have no idea what it is, but I hope it's better than Copilot.
It's their annual dev conference; the first edition was in 2015. Since AI is the buzzword, they mention AI in the description. So, no, Ignite is not their new product. Tbh, Copilot could have told you that😉
 
No one cares about MT capabilities though beyond a certain point. We're not entering the silly amount of cores era. Next gen most parts sold in client will be 8/10/12 core parts. There will be some higher core count options but they are a small part of the market and will remain so. You know this.
"No one"?

AMD has clearly stated (on more than one occasion) their architecture is "Server First".

Client will inherit whatever good can be had from this approach; however, you won't be seeing a "Client First" design from AMD.

This is a business strategy they have adopted. They make the money on the high end and force Intel into the unenviable position of providing commodity parts at the low end and OEM level.

Gamers don't care about MT. Business desktop doesn't care about MT (or ST); DC does, and so does HEDT... ironically, both markets with very high margins.

AMD seems to be banking on gamers as well though. They have X3D for that.
 
TSM nodes are inherently fmax-focused, especially in the finflex era.

Maybe for Intel.
Other vendors have no such skill issues.
QCOM got 19% fmax off N3X plus some Vmax cranking.
QCOM was not coming from a hyper-optimized node like Intel 7, which is just Fmax-tuned to the max.
 
"No one"?

AMD has clearly stated (on more than one occasion) their architecture is "Server First".

Client will inherit whatever good can be had from this approach; however, you won't be seeing a "Client First" design from AMD.

This is a business strategy they have adopted. They make the money on the high end and force Intel into the unenviable position of providing commodity parts at the low end and OEM level.

Gamers don't care about MT. Business desktop doesn't care about MT (or ST); DC does, and so does HEDT... ironically, both markets with very high margins.

AMD seems to be banking on gamers as well though. They have X3D for that.
Exactly, no one in client. Direct response to the "silly amount of cores in desktop/laptop era" comment.

That era doesn't exist. No one wants that. So we're back to what's important: clock speed and IPC for desktop, and a combination of those plus battery life for mobile.
 