Question: Zen 6 Speculation Thread

Page 338

maddie

Diamond Member
Jul 18, 2010
5,197
5,596
136
Consumers are F'ed until the AI bubble bursts anyway. How many people are going to buy any PC hardware with RAM prices at 10x normal?
Consumers are screwed in some fashion whether it pops or not. AI and related CAPEX is what's driving the macro economy.
 

adroc_thurston

Diamond Member
Jul 2, 2023
8,113
10,871
106
I wonder how much of a change Mi500 is going to be over Mi400
man, how do I even put it?
Will AMD get to lap NVidia?
yeah that's the thing with AMD, sooner or later they're just gonna devour you alive.
They're driving the stock market, but they're a parasite leeching life out of the real economy. Instead of those resources being used to improve life for people, they're being poured into a bottomless money pit.
Nah you see, LLMs aren't useless per se; they just don't yet have a business model that can sustain all that capex.
 
  • Like
Reactions: madtronik

NTMBK

Lifer
Nov 14, 2011
10,513
5,995
136
Nah you see, LLMs aren't useless per se; they just don't yet have a business model that can sustain all that capex.
The actual underlying computer science has some limited utility. These white elephant datacentres with thousands of rapidly aging GPUs will never, ever be worth the effort and money wasted on them.
 
  • Like
Reactions: lopri

Joe NYC

Diamond Member
Jun 26, 2021
4,061
5,600
136
man, how do I even put it?

Oh boy:

2026 Rubin N3P
2026 Mi400 N2P
2027 Rubin Ultra N3P
2027 Mi500 N2P

Lisa reminds me of Viktor Tikhonov, the coach of the Red Army team (CSKA Moscow) and the Soviet national team in the 1970s-1980s. Relentless, like the Terminator.

Which is why the Miracle on Ice in 1980 (defeating the Soviets) was really a miracle.
 

maddie

Diamond Member
Jul 18, 2010
5,197
5,596
136
They're driving the stock market, but they're a parasite leeching life out of the real economy. Instead of those resources being used to improve life for people, they're being poured into a bottomless money pit.
That was my prediction a few weeks ago. They will drain the life from profitable businesses to feed a capital-destroying monster.

The actual underlying computer science has some limited utility. These white elephant datacentres with thousands of rapidly aging GPUs will never, ever be worth the effort and money wasted on them.
Reminds me of the histories of the early jet age. The tech was advancing so quickly that aircraft were seen as obsolete by the time they entered service. I wonder what the depreciation interval for these investments is.
 
  • Like
Reactions: r.p and lightmanek

dr1337

Senior member
May 25, 2020
538
824
136
The actual underlying computer science has some limited utility. These white elephant datacentres with thousands of rapidly aging GPUs will never, ever be worth the effort and money wasted on them.
Worth the effort? Highly debatable; it's the same effort that goes into consumer products, if not less.

Assuming the industry doesn't completely crater, all the old (current) stuff should wind up on the used market in 5 or so years. Then we can all run AI at home instead of in the cloud.

That was my prediction a few weeks ago. They will drain the life from profitable businesses to feed a capital-destroying monster.
This is the main concern. I work in integration, and commercial solutions are drying up fast. It's definitely a push by SaaS providers to cut off the rest of the industry.

They don't want independent AIs or local systems. OpenAI is pretty evil about it too: they release a larger, less censored model initially, then cull it back after a few weeks once they know the weaknesses, tricking consumers into thinking they're getting one product when it's really another.

Can't rug-pull people if we all have the models downloaded locally.

I do wonder about the circular economics. The AI companies buying hardware at higher-than-normal margins, combined with the hardware companies buying shares in the AI companies, just seems like a Ponzi scheme. If the banks ever stop giving the AI companies seed money, the whole thing is just a house of cards. (Another reason they want to prevent local AI: to keep control of consumers' money.)
 

Joe NYC

Diamond Member
Jun 26, 2021
4,061
5,600
136

Good catch by C&C: Venice-X lives.

BTW, that was my prediction: that AMD would bring V-Cache back to servers, even though Turin-X was skipped.

But more likely, I think, this is going to use the same CCD (with V-Cache) as desktop, i.e. full Zen 6 cores rather than dense cores.

So Venice-X would have (12 × 4 MB L3 + 12 × 8 MB V-Cache) × 8 chiplets =
(48 MB + 96 MB) × 8 chiplets = 144 MB × 8 chiplets = 1,152 MB
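A quick sketch of that arithmetic (Python), treating everything above as this post's assumptions rather than confirmed specs: 12-core classic CCDs, 4 MB L3 per core, 96 MB of stacked V-Cache per CCD, and 8 CCDs per package.

```python
# Hypothetical Venice-X L3 total, using the assumptions from the post above
# (not confirmed specs): 12-core classic CCD, 4 MB L3/core, 96 MB V-Cache/CCD, 8 CCDs.
CORES_PER_CCD = 12
L3_PER_CORE_MB = 4
VCACHE_PER_CCD_MB = 96
CCDS = 8

base_l3_mb = CORES_PER_CCD * L3_PER_CORE_MB        # 48 MB base L3 per CCD
per_ccd_mb = base_l3_mb + VCACHE_PER_CCD_MB        # 144 MB per CCD with V-Cache
package_mb = per_ccd_mb * CCDS                     # 1,152 MB per package

print(f"{base_l3_mb} MB L3 + {VCACHE_PER_CCD_MB} MB V-Cache = {per_ccd_mb} MB per CCD")
print(f"Total across {CCDS} CCDs: {package_mb} MB")
```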
 
  • Like
Reactions: lightmanek

Tarkin77

Member
Mar 10, 2018
96
202
106
Lisa says Mi500 will be 10x Mi400.

Is it 10x?

I really don't know how to interpret this:

AMD shared additional details at CES on the next-generation AMD Instinct MI500 GPUs, planned to launch in 2027. The MI500 Series is on track to deliver up to a 1,000x increase in AI performance compared to the AMD Instinct MI300X GPUs introduced in 2023.


"Based on engineering projections by AMD Performance Labs in December 2025, to estimate the peak theoretical precision performance of AMD Instinct™ MI500 Series GPU powered AI Rack vs. an AMD Instinct MI300X platform. Results subject to change when products are released in market."

Is the 1,000x Helios vs. a single "platform" of 8x Mi300X? Is the 10x vs. a single Mi400 GPU, or a Helios rack vs. an Mi500 "Titan" rack (with 3.5x the number of GPUs)?

????
 
  • Like
Reactions: lightmanek

Joe NYC

Diamond Member
Jun 26, 2021
4,061
5,600
136
Is it 10x?

I really don't know how to interpret this:

AMD shared additional details at CES on the next-generation AMD Instinct MI500 GPUs, planned to launch in 2027. The MI500 Series is on track to deliver up to a 1,000x increase in AI performance compared to the AMD Instinct MI300X GPUs introduced in 2023.


"Based on engineering projections by AMD Performance Labs in December 2025, to estimate the peak theoretical precision performance of AMD Instinct™ MI500 Series GPU powered AI Rack vs. an AMD Instinct MI300X platform. Results subject to change when products are released in market."

Is the 1,000x Helios vs. a single "platform" of 8x Mi300X? Is the 10x vs. a single Mi400 GPU, or a Helios rack vs. an Mi500 "Titan" rack (with 3.5x the number of GPUs)?

????

Maybe, using some measure, AMD concluded that:

Mi355 = 10x Mi300
Mi400 = 10x Mi355
Mi500 = 10x Mi400
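If those three 10x steps held on a common basis, the compounding alone would reproduce the headline number; a trivial sanity check (the per-generation 10x figures are the speculation above, not anything AMD has stated):

```python
# Compounding the speculated 10x-per-generation steps: Mi300 -> Mi355 -> Mi400 -> Mi500.
steps = {"Mi355 vs Mi300": 10, "Mi400 vs Mi355": 10, "Mi500 vs Mi400": 10}

total = 1
for label, factor in steps.items():
    total *= factor
    print(f"{label}: {factor}x (cumulative vs Mi300: {total}x)")
# Final cumulative value: 1000x, matching the "up to 1,000x" claim vs MI300X.
```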
 

basix

Senior member
Oct 4, 2024
291
589
96
Maybe, using some measure, AMD concluded that:

Mi355 = 10x Mi300
Mi400 = 10x Mi355
Mi500 = 10x Mi400
I think it is rather simple: an 8x GPU MI300X platform vs. a full MI500 rack.
MI300X delivers 1.3 PFLOPS of FP16 (matrix). An MI300X platform is an 8x GPU cluster, so that works out to 10.4 PFLOPS.

MI455X will bring FP4 support, and at the Helios rack level that results in 3 ExaFLOPS. Now double the number of GPUs per rack for MI500, make the GPUs themselves 1.75x faster (plausible if, e.g., GPU size grows by using 3x or 4x base die tiles instead of 2x), and we land at 10.5 ExaFLOPS per rack.

10.5 ExaFLOPS / 10.4 PFLOPS ≈ 1,000x ;)
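A back-of-the-envelope version of that math; the 1.3 PFLOPS FP16 per MI300X, the 8-GPU platform, and the ~3 EF FP4 Helios figure come from the post, while the 2x GPU count and 1.75x per-GPU uplift for MI500 are the post's own assumptions:

```python
# Rack-level check of the "up to 1,000x" claim, using the numbers from the post above.
MI300X_FP16_PFLOPS = 1.3                 # per GPU, matrix FP16
MI300X_PLATFORM_GPUS = 8
platform_pflops = MI300X_FP16_PFLOPS * MI300X_PLATFORM_GPUS      # ~10.4 PFLOPS

HELIOS_FP4_EXAFLOPS = 3.0                # MI455X-based Helios rack, FP4
GPU_COUNT_SCALING = 2.0                  # assumption: 2x GPUs per rack for MI500
PER_GPU_SCALING = 1.75                   # assumption: 1.75x faster per GPU (more base die tiles)
mi500_rack_exaflops = HELIOS_FP4_EXAFLOPS * GPU_COUNT_SCALING * PER_GPU_SCALING  # 10.5 EF

ratio = (mi500_rack_exaflops * 1000) / platform_pflops           # ExaFLOPS -> PFLOPS
print(f"MI500 rack vs. 8x MI300X platform: ~{ratio:.0f}x")       # ~1010x, i.e. "up to 1,000x"
```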
 

Joe NYC

Diamond Member
Jun 26, 2021
4,061
5,600
136
Any guesses about Venice-X?
1) 12-core CCD + V-Cache
2) 32-core CCD + V-Cache
3) something entirely different, such as Zen 6 cores on Mi430 GPU, with base die area under the cores serving as V-Cache

(attached image)
 
  • Like
Reactions: lightmanek

Win2012R2

Golden Member
Dec 5, 2024
1,281
1,327
96
I don’t think the 32 core CCD has 3D TSVs.
As much as I like the idea, you are most likely right. From what I could see, AMD sold a lot of Milan-X to companies with high per-core licensing costs, where 3D cache was a big speedup for the software they used (like EDA). Those buyers favour fast-clocked cores plus lots of cache, whereas max-core SKUs are for hyperscalers that run independent workloads on them.
 

Win2012R2

Golden Member
Dec 5, 2024
1,281
1,327
96
OK, it's 2026 - when should we expect to see the first perf leaks?

If intro is in 6 months, then perhaps Mar-Apr for 60%+ videos from MLID?
 

Win2012R2

Golden Member
Dec 5, 2024
1,281
1,327
96
Those you can model off Venice perf numbers.
I don't think I'll be able to afford those server chips, not with memory prices as they are, so this time I'm thinking about a personal Zen 6 upgrade.

Once the IPC is known for the Venice chips, I doubt they'll be able to hide much longer.