Question Zen 6 Speculation Thread

Page 193 - AnandTech Forums

Doug S

Diamond Member
Feb 8, 2020
? Apple got a 10% frequency increase from N3B

You're assuming they got that all from process, and none from design or from deciding to use more power in single core loads.

You're also ignoring that Apple was working with a much lower clock rate and much lower core power than AMD. It's much easier to add 10% frequency when you're starting at 3.5 GHz than it is when you're starting at 6.
 

Io Magnesso

Senior member
Jun 12, 2025
You're assuming they got that all from process, and none from design or from deciding to use more power in single core loads.

You're also ignoring that Apple was working with a much lower clock rate and much lower core power than AMD. It's much easier to add 10% frequency when you're starting at 3.5 GHz than it is when you're starting at 6.
Do you see 6GHz as the normal operating frequency?
 

Doug S

Diamond Member
Feb 8, 2020
Do you see 6GHz as the normal operating frequency?

There's some binning in that number that Apple isn't doing, so maybe 5.5 GHz would be a fairer comparison to the 3.5 GHz Apple was operating the M2 at before they went to N3B. Either way, there is a HUGE gap between the frequency ranges in which AMD and Apple chips operate. It is shrinking, but I stand by my statement that attributing the gains on Apple's side to "process" and assuming AMD should see the same percentage gains from "process" is folly.
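To put numbers on that gap: the same percentage gain costs very different absolute frequency steps at the two baselines quoted in this thread. A quick back-of-the-envelope (the 3.5 GHz and 5.5-6.0 GHz figures are the ones from the posts above):

```python
# Absolute frequency step implied by a 10% gain at different baselines
# (3.5 GHz is the M2 figure quoted above; 5.5-6.0 GHz the AMD range).
for base_ghz in (3.5, 5.5, 6.0):
    step_mhz = base_ghz * 0.10 * 1000
    print(f"+10% at {base_ghz:.1f} GHz = +{step_mhz:.0f} MHz")
```

So a 10% "process" gain at Apple's baseline is ~350 MHz of extra headroom, while the same percentage at 6 GHz would mean finding ~600 MHz.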
 

MS_AT

Senior member
Jul 15, 2024
I will add that the ~40% for Zen5 was true for pure heavy AVX-512 only, but it had some truth to it.
Well, my FFT routines got 90% when running hot from cache ;)

EPYC Zen 5 comes nowhere near +40% vs EPYC Zen 4, in SPECint or SPECfp overall, so it doesn't matter.
Single core? X cores vs X cores or socket vs socket? Or any of those?

from bigger L3 slightly lower memory latency
How does victim cache lower memory latency?

why wouldn't you expect a lower latency?
In which part of the hierarchy would you expect to see lower latency, and what would be the reason for it?

N4P 9950X @ 5.7GHz uses ~35 watts
Core alone or core + IOD?
 

Joe NYC

Diamond Member
Jun 26, 2021
How does victim cache lower memory latency?

That was meant to apply to the other feature of Zen 6, which is the new interconnect between the IOD and CCD, and potentially faster memory support.

But for single-threaded apps, the 1.5x larger L3 will reduce cache misses, which is its way of reducing average latency.
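A bigger victim cache doesn't make DRAM itself any faster; it cuts the miss rate, which lowers the average access time. A minimal average-memory-access-time (AMAT) sketch: the latencies and miss rates below are made-up illustrative numbers, not Zen figures.

```python
def amat(hit_ns: float, dram_penalty_ns: float, miss_rate: float) -> float:
    """Average memory access time: cache hit latency plus the DRAM
    penalty weighted by how often the cache misses."""
    return hit_ns + miss_rate * dram_penalty_ns

# Illustrative numbers only: 10 ns L3 hit, 70 ns extra on a miss.
print(amat(10, 70, 0.20))  # smaller L3, 20% miss rate -> 24.0 ns average
print(amat(10, 70, 0.12))  # bigger L3, fewer misses   -> 18.4 ns average
```

The DRAM penalty term is unchanged in both calls; only the weight in front of it shrinks, which is exactly the "bigger L3 lowers effective latency" argument.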
 

Det0x

Golden Member
Sep 11, 2014
The price you pay for minimal IPC gains. This is why @gdansk is right: while increasing clock is the easiest way to increase the performance of a core, it comes at the cost of power consumption.

N4P 9950X @ 5.7GHz uses ~35 watts. Will AMD maintain this level of power consumption at >6.0GHz?
A single PBO-boosting Zen 5 core @ ~5.85GHz uses 11.1W average in Cinebench R23 ST, according to HWiNFO.

Total PPT is ~50W
 


Josh128

Golden Member
Oct 14, 2022
Intel isn't the basis of good CPU design anymore. Their P cores are horrible. Anyway, yes, AMD has good engineers. 6GHz isn't some mythical barrier that can't be broken.
This is a super cope. Intel achieved 6.2GHz off-the-shelf boost with their ultra-tweaked-for-frequency 10nm process and ultra-binned CPUs, then failed to do so on a far superior 3nm node. AMD failed to move the frequency bar from Zen 4 to Zen 5 despite a tweaked node. There are physical and electrical barriers to frequency increases at a certain point: inductive, capacitive, and transistor switching-speed limits. All of these have to line up perfectly to reliably increase speeds. Things become exponentially more difficult as speed increases; it's not a linear relationship.
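The superlinear cost falls out of the classic dynamic-power relation P = C * V^2 * f: voltage typically has to rise along with frequency, so power grows roughly with the cube of the scaling factor. A toy model (the capacitance constant, the 1.20V starting voltage, and the voltage-tracks-frequency assumption are all illustrative, not measured silicon behavior):

```python
def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
    """Classic CMOS switching power: P = C_eff * V^2 * f."""
    return c_eff * volts ** 2 * freq_ghz

# Rough assumption: voltage must scale linearly with frequency.
scale = 6.2 / 5.7                          # ~8.8% higher clock
base = dynamic_power(1.0, 1.20, 5.7)       # baseline operating point
oc = dynamic_power(1.0, 1.20 * scale, 6.2) # pushed operating point
print(f"power ratio: {oc / base:.2f}x")    # cubic in the scale factor
```

Under that assumption an ~8.8% clock bump costs roughly (6.2/5.7)^3, i.e. close to 29% more dynamic power, which is why the last few hundred MHz are so expensive.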
 

Det0x

Golden Member
Sep 11, 2014
Can you try using Geekbench ST?
GB6 locked with affinity to a single Zen 5 X3D core with SMT enabled.
HWiNFO can't keep up with such a bursty workload, so average power usage is out the window, but max draw was ~15.7W with 2 threads enabled on a single core, according to HWiNFO.
 

511

Platinum Member
Jul 12, 2024
GB6 locked with affinity to a single Zen 5 X3D core with SMT enabled.
HWiNFO can't keep up with such a bursty workload, so average power usage is out the window, but max draw was ~15.7W with 2 threads enabled on a single core, according to HWiNFO.
Thanks, so that looks like 12-13W/core, not bad tbh.
 
Jul 27, 2020
Yes, I got these numbers from the bottom of my ass, just like MLID did.
Be better than MLID, my dude!

You guys will never believe this BUT

I have a guy in my office who recently joined.

He is the SPITTING image of MLID.

Someone told him that I'm some sort of guru.

So he was visiting my seat every day at like 1 or 2 PM, but I arrive for work at 3 PM, so thankfully I missed him.

He was very enthusiastic about learning stuff from me. But his job is totally different from mine. He's a project officer. Still, his persistence was pretty annoying. He finally gave up and is now probably even soured by my reluctance to take him under my wing and teach him the ABCs of the stuff I do.

I imagine MLID is the same. A relentless bastard who stands outside the homes of whatever sources he knows, looking like a pathetic weasel in the rain, and they throw random tidbits at him just so they can see his stupid wagging tail disappear far off into the distance, fading as he hurries along.
 

Win2012R2

Senior member
Dec 5, 2024
981
980
96
the entire tech media do it for money; gone are the times when it was a passion for people
Doing honest work for money is OK, but when one spreads hoaxes (or lies) for money, that becomes fr....

Some people datamine to find clues as to what will happen - this is fair play - but he claims to be using sources inside orgs to leak confidential info that can cause serious damage to companies. There is no public interest in knowing 18 months early what frequency will be in a product that takes years to design. This isn't journalism.

It's not like he does it just a couple of months before release, either. Frankly, I am amazed AMD did not take him to court - Sony did the right thing to slap him.

I guess he looks superior to that RGT guy who just spews total BS with a super annoying voice to boot.
 