Question Zen 6 Speculation Thread

Page 238 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Thunder 57

Diamond Member
Aug 19, 2007
4,024
6,740
136
The easiest is LM Studio.

If you have at least a 5800X with 32GB of DDR4-3200 (though 3600 or 3733 would be best), try it with the gpt-oss-20b model. It may work with 16GB of RAM, but you will come very close to running out. It could also work fine with slower CPUs, but the wait times will be harder to deal with.

Ask it questions you are curious about. Give it a text or PDF document and ask questions about that. You will be surprised how useful it can be.

Interesting. 5700X (though I run it at +100 max boost compared to a 5800X) and 32GB at 3600 MT/s. I figured it'd want a high-end video card, which I do not have. I haven't really dabbled in AI other than Stable Diffusion, and it is something I should try to keep up with.
 
Jul 27, 2020
27,972
19,112
146
If you have a supported card, it will let you offload some layers to it, further improving speed.

It also exposes PBO/EXPO instability really well.
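If you'd rather script against LM Studio than use its chat window, it can also run a local server that speaks the OpenAI chat-completions API. A minimal stdlib-only sketch, with the caveat that the port and the model identifier below are assumptions — use whatever your LM Studio instance actually shows:

```python
import json
from urllib import request

# LM Studio's usual local-server address; yours may differ.
BASE_URL = "http://localhost:1234/v1"

def build_payload(prompt: str, model: str = "openai/gpt-oss-20b") -> dict:
    """Build an OpenAI-style chat request. The model name is an assumption;
    copy the exact identifier shown in LM Studio's model list."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """Send the prompt to the local server and return the reply text."""
    req = request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

With the server enabled, `ask("Summarize this email: ...")` gives you the same model you chat with, but callable from scripts.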
 

yuri69

Senior member
Jul 16, 2013
677
1,215
136
People honestly do not care about AI workloads. If anything, not many welcome the technology.
In my social bubble, people actually do care.

Regular consumers use the free ChatGPT or Copilot websites on a nearly daily basis for job-related stuff like "writing that stupid email" and for casual stuff like "generating a funny image out of my vacation photo".

At work, some less technical people use Copilot even within M$ products like Excel or Outlook to generate formulas or query stuff. Technical users use AI in various ways - ranging from websites, through coding assistants, to custom MCP servers.

The need for locally run AI models is obviously there - latency, data locality - but easy-to-use products don't seem to be here yet. Is it a lack of standards and/or performance?
 

StefanR5R

Elite Member
Dec 10, 2016
6,670
10,550
136
Give it a text or PDF document and ask questions about that. You will be surprised how useful it can be.
Sounds like Required Reading at school. Some students actually read it. The other students just asked the former ones questions about it.

Attention span, focus, reading comprehension and related abilities are good to have if one ever happens to need them.

Dr. Lisa Su, Chair and CEO of AMD, to Keynote CES 2026 on how AI is Changing the World
For the better, only for the better of course.

Edit,
Why is AMD Launching MI400 at CES
I don't think AMD is launching MI400 at CES.
What AMD needs is to remind investors at every opportunity that there are potential players other than Nvidia. Somehow, the Consumer Technology Association gave them another such opportunity at CES 2026.

Edit 2, to tie this back into Zen 6 speculation: in other words, my guess is that the keynote will deliver pretty much exactly what the press release says, and hence nothing substantial about Zen 6 (the architecture or the generation of products).
 
Last edited:
Jul 27, 2020
27,972
19,112
146
Attention span, focus, reading comprehension and related abilities are good to have if one ever happens to need them.
Not everyone's English comprehension is great, especially for those using it as a second language. Plus, an LLM can summarize the stuff in a neat way and print it in the form of a table, for example. Gotta use whatever advantage you have these days. It's a cruel, unsympathetic and cutthroat world out there.
 
Jul 27, 2020
27,972
19,112
146
The need for locally run AI models is obviously there - latency, data locality - but easy-to-use products don't seem to be here yet. Is it a lack of standards and/or performance?
Probably both.

Performance is possible only with GPUs (32GB VRAM and above). As for standards, well, I suspect developers really don't want to target CUDA and leave everyone else out, but they also don't want to support multiple GPU compute platforms.

M$ could have fixed this problem, but they have a serious obsession with useless NPUs.
 

Hail The Brain Slug

Diamond Member
Oct 10, 2005
3,882
3,311
146
Not everyone's English comprehension is great, especially for those using it as a second language. Plus, an LLM can summarize the stuff in a neat way and print it in the form of a table, for example. Gotta use whatever advantage you have these days. It's a cruel, unsympathetic and cutthroat world out there.
This is a take that focuses on the immediate term and ignores the very real long term degradation of one's own skills by offloading these tasks to an AI.

I won't do it. I have enough problems; I don't need to give myself more.
 

ToTTenTranz

Senior member
Feb 4, 2021
686
1,147
136
Interesting. 5700X (though I run it at +100 max boost compared to a 5800X) and 32GB 3600 MT/s. I figured It'd want a high end video card which I do not have. I haven't really dabbled in AI other than Stable Diffusion and it is something I should try to keep up with.
There are also ways to split a Mixture-of-Experts model between GPU and CPU, where the GPU+VRAM holds and processes only the shared central logic and the CPU+RAM handles the expert weights.

People with lower end GPUs are getting really good performance results from that approach.
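The appeal of that split is easy to see with rough arithmetic. This sketch is illustrative only: the parameter count, shared-layer fraction, and bytes-per-parameter (quantization) figures are assumptions, not measurements of any particular model.

```python
def moe_memory_split(total_params: float, shared_frac: float,
                     bytes_per_param: float) -> tuple[float, float]:
    """Estimate the GPU/CPU memory split when the shared layers stay in
    VRAM and the expert weights are offloaded to system RAM.
    All inputs are illustrative assumptions, not measured figures."""
    shared = total_params * shared_frac
    experts = total_params - shared
    gib = 2 ** 30
    return shared * bytes_per_param / gib, experts * bytes_per_param / gib

# Hypothetical 20B-parameter MoE quantized to ~4.5 bits (0.5625 bytes) per
# weight, with ~15% of parameters in the shared/attention layers:
gpu_gib, cpu_gib = moe_memory_split(20e9, 0.15, 0.5625)
# -> roughly 1.6 GiB in VRAM and 8.9 GiB in system RAM
```

Under those assumed numbers, even a modest GPU can hold the always-active shared layers, which is why this approach helps lower-end cards so much.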


Also, Nvidia's latest PostNAS workflows are resulting in >6x speedups even on older architectures like Ampere. Look up Jet-Nemotron.



The only problem is that most of these approaches are still very early and only available to people willing to navigate Linux commands and heavy Python scripting. It shouldn't take too long for someone to make them available to everyone as a Windows/Mac executable.
 
Jul 27, 2020
27,972
19,112
146
This is a take that focuses on the immediate term and ignores the very real long term degradation of one's own skills by offloading these tasks to an AI.
I suppose it depends on your philosophy and work ethic. Some of us are lazy and aren't going to change our ways, so for us AI is more like: oh nice, more laziness!

And yes, I know and have SEEN people who will torture themselves through figuring something out, no matter how long it takes, because they are fully dedicated to learning. Sorry, I know that's the right way to do it, but it's not for me. Life is just too short.
 
Jul 27, 2020
27,972
19,112
146
You're also offloading stuff to a very unreliable system.
Just don't.
Maybe you should use it and help improve it for everyone else, rather than impede progress by telling others not to even experience it? It's not like people will start believing that chickens have green blood just because it says so. If someone is that clueless, they are going to be doing and believing stupid stuff without AI anyway.
 
Jul 27, 2020
27,972
19,112
146
Also, Nvidia's latest PostNAS workflows are resulting in >6x speedups even on older architectures like Ampere. Look up Jet-Nemotron.
That's the fascinating thing. New techniques are being discovered all the time. Wish I could get a peek into the future and see how much of an alien race we become :D

Seriously, once we figure everything out, we won't even need these fleshy decaying bodies to "live". We could exist, essentially until the heat death of the Universe.
 
Jul 27, 2020
27,972
19,112
146
what's there to even discuss.
Will Zen 6 SMT be even more effective?

Will Zen 6 finally be able to make the dual decode clusters work for a single thread?

Will Zen 6 improve the cycle latency for all AVX-512 instructions and finally achieve parity with Intel's instruction cycle times?

Will Zen 6 make 10,000 MT/s RAM speed possible and attainable by mere mortals?

Will Zen 6 at least try to go for 8000 1:1 RAM speeds?

What kind of improvement can we expect from the Zen 6 FCLK clocks?

Will Zen 6 finally be the architecture that Intel is not able to beat in any benchmark (been waiting SO LONG for that!) ?

Is Zen 6 going to be the first Ryzen consumer CPU that sees almost no benefit from delidding?

Will Zen 6 finally usher in the era of dual V-cache CCDs, for the love of all that is holy and sacred?

Will Zen 6 have an NPU?

Will the iGPU of Zen 6 be so powerful that it can obviate the need for an NPU and it actually masquerades as an NPU when needed so M$ can back the F off?

Will Zen 6 be the first Ryzen that beats Intel CPUs in idle power usage?
 

adroc_thurston

Diamond Member
Jul 2, 2023
7,070
9,815
106
Will Zen 6 SMT be even more effective?
No.
Will Zen 6 finally be able to make the dual decode clusters work for a single thread?
They already do.
Will Zen 6 improve the cycle latency for all AVX-512 instructions and finally achieve parity with Intel's instruction cycle times?
No idea.
Ask Mysticial.
Will Zen 6 make 10,000 MT/s RAM speed possible and attainable by mere mortals?
Yes.
Will Zen 6 at least try to go for 8000 1:1 RAM speeds?
idk.
What kind of improvement can we expect from the Zen 6 FCLK clocks?
moar.
Will Zen 6 finally be the architecture that Intel is not able to beat in any benchmark (been waiting SO LONG for that!) ?
idk.
Is Zen 6 going to be the first Ryzen consumer CPU that sees almost no benefit from delidding?
Well the IHS is the same thiccass slab as anything else AM5, so...
Will Zen 6 finally usher in the era of dual V-cache CCDs, for the love of all that is holy and sacred?
pipe down.
Will Zen 6 have an NPU?
Discrete GEMM blobs are on their way out, courtesy of our Lord and Savior gfx13.
Will the iGPU of Zen 6 be so powerful that it can obviate the need for an NPU and it actually masquerades as an NPU when needed so M$ can back the F off?
Well for gfx13 parts yes.
Will Zen 6 be the first Ryzen that beats Intel CPUs in idle power usage?
on DT? maybe.
 

MS_AT

Senior member
Jul 15, 2024
868
1,760
96
They already do.
If you mean Zen 5, then the Software Optimization Guide and C&C disagree with this statement. Unless, for some reason, 4 inst/C in ST mode is achieved using both decoders and the throughput limitation comes from elsewhere.
Ask Mysticial.
So does he already have a sample?;)
Will Zen 6 improve the cycle latency for all AVX-512 instructions and finally achieve parity with Intel's instruction cycle times?
You mean they should slow some of them down? I mean, for AMD to match Intel? ;)
 

adroc_thurston

Diamond Member
Jul 2, 2023
7,070
9,815
106
If you mean Zen5, then Software optimization guide and C&C disagree with this statement
Turns out it's not true.
Unless for some reason, 4 inst/C in ST mode is achieved using both decoders and throughput limitation comes from elsewhere.
yea. I hate icaches.
long live icaches.
So does he already have a sample?;)
No, but you'll get to enjoy his ramblings.
You mean they should slow down some of them
He means 2clk FADD regression.
 

MS_AT

Senior member
Jul 15, 2024
868
1,760
96
yea. I hate icaches.
long live icaches.
Do you know if there is a write-up somewhere? I thought the dual fetch on the ICache was meant to help with that; curious whether I read it wrong or they hit a funny problem somewhere.
No, but you'll get to enjoy his ramblings.
Looking forward to it.
He means 2clk FADD regression.
I guess a minimum 2clk latency for an otherwise 1clk instruction, due to the "scheduler full" problem. 2clk FADD always would be a nice boost; right now it's 2clk only on the happy path and 3clk otherwise, though that part is not affected by the "scheduler full" problem.