Zen 6 Speculation Thread


Josh128

Golden Member
Oct 14, 2022
1,098
1,655
106
No? Zen 4 bumped up both dual-CCD chips to 230W PPT (170W TDP).
Ah, that's right, but I don't think the 12-core SKU pulled that full 230W PPT. In any case, if you use the 7700X vs the 5800X, two 105W 8-core SKUs, in R15 you get:

R7 5800X: 2611 CB
R7 7700X: 3328 CB
-------------------------------
+27%

So it doesn't change the extrapolation that much. Still comes out to ~65%:

1.3 × 1.27 ≈ 1.65
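Spelled out as a quick sanity check (the 1.3 factor is assumed to come from earlier in the thread; the 1.27 is the 7700X/5800X ratio above):

```python
# Compounding the two generational multi-core gains from the post above.
zen4_gain = 3328 / 2611   # 7700X vs 5800X Cinebench scores quoted above (~1.27)
other_gain = 1.30         # the 1.3 factor used in the extrapolation

combined = other_gain * zen4_gain
print(f"{combined:.2f}")  # 1.66, i.e. roughly the ~65% extrapolated uplift
```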
 

Josh128

Golden Member
Oct 14, 2022
1,098
1,655
106
This extrapolation is useless.
desktop ≠ server
GloFo_14nm→TSMC_N7 ≠ N7→N5 ≠ N3→N2
Cinebench ≠ Dr. Lisa Su's mysterybench

(The "≠" symbol standing in as shorthand for apples-to-oranges here.)
Well, too bad. That's my story and I'm stickin' to it!
 

fastandfurious6

Senior member
Jun 1, 2024
626
804
96
LLMs are a force multiplier for R&D and all knowledge work pushing boundaries

expect the Zen 6 generational jump to bring more significant improvements than most other jumps

but other companies on N2 will also catch up
 

RnR_au

Platinum Member
Jun 6, 2021
2,551
5,958
136
LLMs are a force multiplier for R&D and all knowledge work pushing boundaries

expect the Zen 6 generational jump to bring more significant improvements than most other jumps

but other companies on N2 will also catch up
Nah. Not yet. The Zen 6 effort has been going on for years. They would not have integrated LLMs deeply into their work, if at all.

Maybe in the future this will happen with custom-trained LLMs, but the cost penalty for letting 'AI slop' into production would make adoption of such tech a measured and careful piece of work.
 

fastandfurious6

Senior member
Jun 1, 2024
626
804
96
there is no AI slop when they're used by real subject matter experts, it's just a force multiplier

i.e. the slop is filtered out by the human eye, and the expert recognizes slop instantly

100% they're already used, even generic LLMs (the big ones, though)
 

RnR_au

Platinum Member
Jun 6, 2021
2,551
5,958
136
i.e. the slop is filtered out by the human eye, and the expert recognizes slop instantly
And your staff never get tired and always have 100% perfect concentration and focus with zero distractions 👍

Sorry, but I know from software development that even the big LLMs can give you very good-looking code, but when you look closer it can be wrong in impressively subtle ways.

LLMs are still stochastic processes in the end. You need to have a smorgasbord of QA processes surrounding their output.
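To illustrate the point, a toy, hypothetical sketch (not output from any actual model): code that reads fine at a glance but is off by one at the window edge, exactly that "impressively subtle" failure mode:

```python
def rolling_mean(xs, window):
    # Plausible-looking helper; the subtle bug: the range stops one short,
    # so the final full window is silently dropped.
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window)]

def rolling_mean_fixed(xs, window):
    # Correct version: range(len(xs) - window + 1) keeps the last window.
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

print(rolling_mean([1, 2, 3, 4], 2))        # [1.5, 2.5] -- last window missing
print(rolling_mean_fixed([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]
```

A casual review, human or automated, passes the first version; only a test that checks the output length catches it.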
 

Timorous

Golden Member
Oct 27, 2008
1,969
3,850
136
LLMs are still stochastic processes in the end. You need to have a smorgasbord of QA processes surrounding their output.

Just add more AI agents to check the prior output. It is going to end up being AI all the way down.
 

fastandfurious6

Senior member
Jun 1, 2024
626
804
96
QA processes surrounding their output

I imagine every respectable company has strong tests, especially cutting-edge CPU companies...

and those tests are also easier to produce and re-verify with an LLM

I insist LLMs significantly accelerate all workloads, but I know why you disagree and that's good 🤣
 

MS_AT

Senior member
Jul 15, 2024
781
1,590
96
I imagine every respectable company has strong tests, especially cutting-edge CPU companies...

and those tests are also easier to produce and re-verify with an LLM
Do you work for such a company and use this in production, i.e. are you speaking from experience? Or are you just creating buzz on the forum because you're bored? ;)
 

gdansk

Diamond Member
Feb 8, 2011
4,343
7,289
136
I imagine every respectable company has strong tests, especially cutting-edge CPU companies...

and those tests are also easier to produce and re-verify with an LLM

I insist LLMs significantly accelerate all workloads, but I know why you disagree and that's good 🤣
You imagine a lot. There are a lot of papers about using language models in EDA, mainly to generate Verilog and VHDL. And of course, many of the authors are honest enough to list the challenges which you dismiss briskly.

But it's hard to guess if they're using any of this with Zen 6.
 

fastandfurious6

Senior member
Jun 1, 2024
626
804
96
You imagine a lot. There are a lot of papers about using language models in EDA, mainly to generate Verilog and VHDL. And of course, many of the authors are honest enough to list the challenges which you dismiss briskly.

But it's hard to guess if they're using any of this with Zen 6.

everyone uses them... what's said publicly/officially/in academia is a different story, because "rigidity" or "PR" or whatever gives a different outlook


ML was used routinely long before LLMs, of course, but ML is work, LLMs are free work lol
 

Krteq

Golden Member
May 22, 2015
1,008
721
136
Where did you get 40% PPC for AVX-512 workloads from? Was there mentioned somewhere officially? Or are you deriving that from your own workloads?
The only AVX-512 benches I found:

Phoronix said:
If taking the geometric mean of all the raw AVX-512 performance benchmark results, the Core i9 11900K improved by 31% with AVX-512 enabled. The Ryzen 7 7700X meanwhile saw its performance improve by 44% with AVX-512 enabled.

Phoronix said:
When taking the geometric mean of all the AVX-512 workloads tested on this EPYC 9655(P) Supermicro server, AVX-512 yielded 1.57x the performance of the same hardware/software but with AVX-512 forced off.

Phoronix said:
On average for the tested AVX-512 workloads, making use of the AVX-512 instructions led to around 59% higher performance compared to when artificially limiting the Ryzen 9 7950X to AVX2 / no-AVX512.
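For reference, the geometric mean those Phoronix quotes use is just the n-th root of the product of the per-test speedups; a sketch with made-up ratios (illustrative only, not Phoronix's data):

```python
from statistics import geometric_mean

# Hypothetical per-benchmark speedups (AVX-512 on vs. off); illustrative only.
speedups = [1.10, 1.85, 1.40, 2.00, 1.25]

gm = geometric_mean(speedups)
print(f"AVX-512 geomean speedup: {gm:.2f}x")  # 1.48x
```

The geometric mean is the standard choice for averaging ratios because one outlier benchmark can't dominate the way it would with an arithmetic mean.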
 

Makaveli

Diamond Member
Feb 8, 2002
4,967
1,561
136
^^^

Hopefully AMD is smart and only bothers with VCache models for DIY. Gonna need top tier gaming performance to justify the prices.
I don't think so. The DIY market, while very vocal, is not that big.

The OEM market for Dell, HP, etc. (laptops, desktops, workstations) takes priority over DIY.
 

Io Magnesso

Senior member
Jun 12, 2025
583
164
71
OEMs are probably only buying the "low end" models.
Compared to the wider market, the DIY community is shut up in a small place... it's surprisingly interesting to think that it's causing an echo chamber effect.
It's true that the DIY market is also important, but most PCs are made by manufacturers.
And it's not that manufacturer PCs and BTO PCs don't use V-Cache models.
A BTO PC maker in my home country builds PCs for gamers and often uses the V-Cache models.
Of course, there are also regular models.