Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)

Page 105

Joe NYC

Diamond Member
Jun 26, 2021
3,650
5,189
136
H is heterogeneous aka APUs only.
Oh, so that's what the H stands for. Good to know.

As for the heterogeneous part, MLID floated an idea that AMD may be adding some AI chiplets to Turin at some point. If that happens, there is going to be a whole lot of heterogenousity (or is it heterogeneousness?) even in SP5 sockets.
 

Frenetic Pony

Senior member
May 1, 2012
218
179
116
That's just the tip of the iceberg.
I'd imagine they're busy trying to maximize AI inference chips in any way they can. The Xilinx stuff is all inference, and between self-driving cars, the sudden explosion in generative AI (especially in the cloud), and just how much it costs to serve each query, there's a huge market for high-efficiency inference.

AMD offering a kind of inference-heavy server chip somewhere on the spectrum between a traditional CPU socket and an MI300-type offering seems like a natural direction to go.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
There was a rumor, once the Xilinx purchase was approved by all the major countries, that Zen 6 might get some AI tech. It was hinted at by AMD a couple of years ago, and even last year by Lisa during the Zen 4 announcement. It's all speculation on our part, and anything we discuss is worth nothing, since all of that would have been designed in long before any of our musings on the topic got summed up in a digital town square speech like in this thread, or the Zen 6 thread if there is one. I've never seen one myself, so it probably doesn't exist; one of you should make it.
 

Thibsie

Golden Member
Apr 25, 2017
1,127
1,334
136
There was a rumor, once the Xilinx purchase was approved by all the major countries, that Zen 6 might get some AI tech. It was hinted at by AMD a couple of years ago, and even last year by Lisa during the Zen 4 announcement. It's all speculation on our part, and anything we discuss is worth nothing, since all of that would have been designed in long before any of our musings on the topic got summed up in a digital town square speech like in this thread, or the Zen 6 thread if there is one. I've never seen one myself, so it probably doesn't exist; one of you should make it.
Not a problem, that tech was licensed by Xilinx to AMD long before the deal.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
I suspect this means instructions for bf16 and matrix handling.

AMD has typically lagged far behind ARM for CPU µArch design in this arena.
The last time Zen 5 AI inferencing and optimisations came up, this was what was said. I don't expect any dedicated tile for AI work on consumer Ryzen until Zen 6 or 7. Bloody bizarre to even type that out, as I'd laughed at the idea of an AMD comeback even on the eve of Zen's launch.
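
For anyone wondering what "instructions for bf16 and matrix handling" actually buy you: bf16 is just fp32 with the bottom 16 mantissa bits dropped, and the matrix-style instructions (e.g. AVX-512 BF16's VDPBF16PS dot product) multiply bf16 pairs and accumulate in fp32. Here's a rough NumPy sketch of that arithmetic, purely illustrative and not any particular vendor's implementation:

```python
import numpy as np

def to_bf16(x: np.ndarray) -> np.ndarray:
    """Truncate fp32 to bfloat16 (round-to-nearest-even), keeping fp32 storage."""
    bits = x.astype(np.float32).view(np.uint32)
    bias = 0x7FFF + ((bits >> 16) & 1)        # round-to-nearest-even bias
    return ((bits + bias) & 0xFFFF0000).view(np.float32)

rng = np.random.default_rng(0)
a = rng.random((4, 8), dtype=np.float32)
b = rng.random((8, 4), dtype=np.float32)

# The bf16 "matrix handling" pattern: bf16 inputs, fp32 accumulation --
# the same shape of work a dot-product instruction does in hardware.
ref    = a @ b                      # full fp32 reference
approx = to_bf16(a) @ to_bf16(b)    # bf16-rounded inputs, fp32 accumulate
print("max abs error vs fp32:", np.abs(ref - approx).max())
```

The point of the format is exactly that trade: half the memory traffic and double the elements per vector, for a small loss of mantissa precision that most ML workloads tolerate.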
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,211
583
126
I suspect this means instructions for bf16 and matrix handling.

AMD has typically lagged far behind ARM for CPU µArch design in this arena.

I wonder how much that will improve the performance for typical machine learning tasks. Will it be a viable option compared to using GPUs?
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Not a problem, that tech was licensed by Xilinx to AMD long before the deal.
Of course, AMD was very open about their working together long before the deal was announced. Xilinx holds onto a lot more IP than AMD has ever used. Interestingly enough, in their disclosure reports mid-acquisition, Xilinx was approached by another party seeking to buy them out. It was never said who it was, and we can only wildly guess.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
I wonder how much that will improve the performance for typical machine learning tasks. Will it be a viable option compared to using GPUs?
A GPU will always be more efficient for the power used than a CPU, unless x86 can do the work of a GPU under an Apple M processor's 100% load power draw.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,211
583
126
A GPU will always be more efficient for the power used than a CPU, unless x86 can do the work of a GPU under an Apple M processor's 100% load power draw.

So I guess the question then is why add this to the CPU at all, if everyone doing serious machine learning tasks will use a GPU anyway?
 

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
I wonder how much that will improve the performance for typical machine learning tasks. Will it be a viable option compared to using GPUs?
As with all things, I suppose it depends on the ML frameworks used and their optimisations.

There's no reason that both can't be used in tandem - in fact that has been ARM's "Total Compute" focus for the last couple of years, and Qualcomm have similarly touted a combined AI compute spec in recent years, tying the CPU, Adreno and Hexagon DSP together.
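
To make the "in tandem" point concrete, here is a minimal, hypothetical PyTorch sketch of the usual split: branchy pre-processing stays on the CPU cores while the matmul-heavy model runs on whatever accelerator the framework finds. The model, shapes and data are made up purely for illustration:

```python
import torch

# Pick an accelerator if one is present, otherwise everything stays on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Sequential(
    torch.nn.Linear(512, 2048), torch.nn.ReLU(), torch.nn.Linear(2048, 10)
).to(device).eval()

def preprocess(batch):
    # CPU-side work: small, branchy, cache-friendly, not worth a device round-trip.
    x = torch.tensor(batch, dtype=torch.float32)
    return (x - x.mean()) / (x.std() + 1e-6)

with torch.no_grad():
    x = preprocess([[0.5] * 512] * 32)   # stays on the CPU
    y = model(x.to(device))              # heavy lifting on the GPU if available
print(y.shape, "computed on", device)
```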
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
So I guess the question then is why add this to the CPU at all, if everyone doing serious machine learning tasks will use a GPU anyway?
The assumption you have is that everyone will have a top-of-the-line consumer GPU or a Quadro. That isn't the case. The "engines" on the processor can be flexed to speed up or improve general software. It's a chicken-and-egg problem if you ask me.
 

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
A GPU will always be more efficient for the power used than a CPU
Not always.

GPUs are great for certain tasks, and terrible at others.

A many-core CPU will always be better than GPU general compute at video encoding, especially as complexity and inter-frame dependencies increase with each new codec generation.

I believe data-heavy processing favors GPUs and task/decision-heavy (uber-branching) processing favors CPUs.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Not always.

GPUs are great for certain tasks, and terrible at others.

A many-core CPU will always be better than GPU general compute at video encoding, especially as complexity and inter-frame dependencies increase with each new codec generation.
You're mixing up software encoding with hardware encoding here. If you use the iGPU on Intel or AMD to encode video, it's hardware encoding, and the quality will not be there. We're at the point where brute-forcing software encoding is very fast, at least compared to a mere 5 years ago.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,211
583
126
The assumption you have is that everyone will have a top-of-the-line consumer GPU or a Quadro. That isn't the case. The "engines" on the processor can be flexed to speed up or improve general software. It's a chicken-and-egg problem if you ask me.

Well, I'm thinking there aren't that many desktop PC users doing machine learning training tasks as a "hobby project". And those that do it professionally will use GPUs anyway.

But regardless, it will be interesting to see how an AMD 8950X Zen 5 CPU with these AI extensions will improve ML task performance compared to the 7950X. Will it e.g. be 2x or 5x faster for such operations?

And how will the 8950X perform against e.g. an RTX 4080/4090 GPU? Will the GPU be e.g. 2x, 5x, or 10x faster?
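
Nobody can answer the 2x/5x/10x question until the parts exist, but measuring it is straightforward once they do. A rough PyTorch timing sketch, assuming a build with CUDA support for the GPU side; the matrix size, iteration count and the bf16 CPU run are just illustrative, and bf16 throughput on the CPU depends on the hardware and the PyTorch build:

```python
import time
import torch

def bench(device, dtype=torch.float32, n=4096, iters=20):
    a = torch.randn(n, n, device=device, dtype=dtype)
    b = torch.randn(n, n, device=device, dtype=dtype)
    for _ in range(3):                  # warm-up so lazy init doesn't pollute timing
        a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

cpu_fp32 = bench("cpu")
cpu_bf16 = bench("cpu", torch.bfloat16)  # exercises bf16 paths where the CPU/build has them
print(f"CPU fp32: {cpu_fp32*1e3:.1f} ms   CPU bf16: {cpu_bf16*1e3:.1f} ms")

if torch.cuda.is_available():
    gpu_fp32 = bench("cuda")
    print(f"GPU fp32: {gpu_fp32*1e3:.1f} ms  (~{cpu_fp32/gpu_fp32:.1f}x vs CPU fp32)")
```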
 

CakeMonster

Golden Member
Nov 22, 2012
1,630
809
136
I fear people might be disappointed: whatever 'AI' instructions are coming in Z5 were probably planned years ago, before the current hype, and are probably some really simple stuff. I'm sure they can be useful, but I doubt they're revolutionary; that will probably come in later generations.
 

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
You're mixing up software encoding with hardware encoding here
No I'm not.

I specifically said general compute - as in using the shaders instead of an ASIC.

An ASIC can be more efficient for power consumption, but its utility is minimal as a fixed-function hardware block.

ASIC encoding quality is also generally not up to snuff with software encoding in terms of quality tuning, for at least the first 5+ years of a codec's availability, and often beyond.
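
For anyone who wants to see the gap for themselves, the usual approach is to push the same clip through a software encoder (running on the CPU cores) and a fixed-function hardware encoder, then compare size and quality. A rough sketch driving ffmpeg from Python, assuming an ffmpeg build that includes libx265 and AMD's hevc_amf encoder; the file names and settings are placeholders only:

```python
import subprocess

SRC = "clip.mkv"  # placeholder input file

# Software encode: x265 on the CPU cores, quality-targeted via CRF.
software = ["ffmpeg", "-y", "-i", SRC,
            "-c:v", "libx265", "-preset", "medium", "-crf", "22",
            "sw_x265.mkv"]

# Hardware encode: AMD's fixed-function block via AMF, bitrate-targeted.
hardware = ["ffmpeg", "-y", "-i", SRC,
            "-c:v", "hevc_amf", "-b:v", "6M",
            "hw_amf.mkv"]

for cmd in (software, hardware):
    subprocess.run(cmd, check=True)

# From here, compare file sizes and run a metric such as SSIM or VMAF
# (ffmpeg's ssim/libvmaf filters) to quantify the quality-per-bit difference.
```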
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Well, I'm thinking there aren't that many desktop PC users doing machine learning training tasks as a "hobby project". And those that do it professionally will use GPUs anyway.

But regardless, it will be interesting to see how an AMD 8950X Zen 5 CPU with these AI extensions will improve ML task performance compared to the 7950X. Will it e.g. be 2x or 5x faster for such operations?

And how will the 8950X perform against e.g. an RTX 4080/4090 GPU? Will the GPU be e.g. 2x, 5x, or 10x faster?
Quadros aren't cheap.

As for your other questions... why not ask me when humans will grow wings? That would be easier to answer.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
No I'm not.

I specifically said general compute - as in using the shaders instead of an ASIC.

An ASIC can be more efficient for power consumption, but its utility is minimal as a fixed-function hardware block.

ASIC encoding quality is also generally not up to snuff with software encoding in terms of quality tuning, for at least the first 5+ years of a codec's availability, and often beyond.
If you want the most power-efficient video encoder, you buy that AMD/Xilinx hardware. 1500 bones gets you something more efficient than either of the two. I'll send you my Xmas gift list and you lot can draw matchsticks to see who ends up buying it for me.