Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)


Fjodor2001

Diamond Member
Feb 6, 2010
4,208
583
126
1T is what everyone will benefit from.
There’s no way for AMD to increase MT perf enough by only improving ST perf and not adding more cores.

If they go from 16C->32C instead, then MT perf may increase 50-100% for MT-heavy workloads. To achieve the same by only improving ST perf, ST perf would have to increase 50-100%, which AMD has no chance of coming even close to.
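To put rough numbers on that, here is a minimal Amdahl's-law sketch in Python (the parallel fractions are assumptions for illustration, not measurements):

def amdahl_speedup(parallel_fraction, cores):
    # Classic Amdahl's law: the serial part of the workload does not scale with core count.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.90, 0.95, 0.99):
    gain = amdahl_speedup(p, 32) / amdahl_speedup(p, 16) - 1.0
    print(f"parallel fraction {p:.2f}: 16C -> 32C gains {100 * gain:.0f}%")

# Prints roughly 22%, 37% and 76%: only near-perfectly parallel workloads get
# close to doubling from the extra cores, and matching even that gain without
# more cores would need a comparably large ST uplift.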
 

adroc_thurston

Diamond Member
Jul 2, 2023
7,073
9,817
106
There’s no way for AMD to increase MT perf enough by only improving ST perf and not adding more cores.
I mean that's how the 96c Turin works.
You also crank 25% more watts for posterity.

Try guessing 96c Turin socket perf (SIR, but you can imagine weighted workload average too).
 

gdansk

Diamond Member
Feb 8, 2011
4,568
7,681
136
There’s no way for AMD to increase MT perf enough by only improving ST perf and not adding more cores.

If they go from 16C->32C instead, then MT perf may increase 50-100% for MT-heavy workloads. To achieve the same by only improving ST perf, ST perf would have to increase 50-100%, which AMD has no chance of coming even close to.
16C is more than enough. If it isn't then buy Epyc or Threadripper because the workload is lovely. 50-100% MT isn't happening without increasing the core count. But what do they need that for? They already increased the core count well beyond 32.

This isn't the 4-core or 8-core era. We have enough MT now for 20% improvements to be sufficient. And further increases are number wanking with marginal utility on the desktop. If you're running a bunch of VMs or an embarrassingly parallel workload, then buy the part designed for that.
 
Last edited:

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,237
16,106
136
16C is more than enough. If it isn't then buy Epyc or Threadripper because the workload is lovely. 50-100% MT isn't happening without increasing the core count. But what do they need that for? They already increased the core count well beyond 32.

This isn't the 4-core or 8-core era. We have enough MT now for 20% improvements to be sufficient. And further increases are number wanking with marginal utility on the desktop. If you're running a bunch of VMs or an embarrassingly parallel workload, then buy the part designed for that.
Exactly. I keep getting crap like "Why do you need sound on your Genoas? It's a server." I don't use them as servers; I need more cores.
 
  • Like
Reactions: lightmanek

Fjodor2001

Diamond Member
Feb 6, 2010
4,208
583
126
16C is more than enough. If it isn't then buy Epyc or Threadripper because the workload is lovely.
Same mantra Intel was pushing before we got Zen. 4C is more than enough for all desktop PCs, they said. If you want more, you'll have to buy our expensive HEDT or server CPUs. Then Zen entered the market and bumped it first to 8C and then to 16C on desktop PCs.

Now it's AMD that has stagnated and says 16C is all you need on desktop PCs. The irony…
 

poke01

Diamond Member
Mar 8, 2022
4,197
5,543
106
It seems AMD will have a lead on Intel for a few good months on desktop then.
 

adroc_thurston

Diamond Member
Jul 2, 2023
7,073
9,817
106
It was a buggy release, and Intel seems to be the only CPU designer that goes backwards in features or IPC.
The funny bit is MTL being delayed 3 quarters, from Q1'23 to Q4'23.
Other delayed parts like ADL launched just fine; I've no idea what went wrong with MTL.
If it wasn't for AMD, x86 would have been a "dead man walking" type scene.
Eh, still the incumbent, but yes, not competitive on raw technicalities.
Which brings us to LNC...
 

JustViewing

Senior member
Aug 17, 2022
269
473
106
I was gunning for a 24/32-core Zen 5. The +30% IPC 16-core APU at $999 doesn't really excite me. However, over the last few weeks I've been playing around with Stable Diffusion and other locally hosted LLMs and was really impressed with them. Now I would be happy if they introduced some sort of AI coprocessor instead of an extra 16 cores, one that could increase performance on these AI workloads by something like 10x. That would have a real-world performance impact.

Before somebody says GPU: I think a specialized AI coprocessor will be more energy efficient than a graphics-oriented GPU.

A 16-core Zen 5 + AI coprocessor with 10x the performance of the 16 cores, now that would excite me.
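For context, the local LLM tinkering is nothing exotic. A minimal sketch of that kind of experiment, assuming the llama-cpp-python package and a locally downloaded GGUF model (the file name here is just a placeholder):

from llama_cpp import Llama

# Load a quantized model from disk; by default inference runs on the CPU,
# which is exactly where more cores, wider vector units or a dedicated
# accelerator would pay off.
llm = Llama(model_path="./models/example-7b.Q4_K_M.gguf", n_threads=16)

# One short completion.
out = llm("Q: What would a desktop AI coprocessor speed up? A:", max_tokens=64)
print(out["choices"][0]["text"])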
 

JustViewing

Senior member
Aug 17, 2022
269
473
106
GPUs haven't been graphics oriented since early 2010 and early 2012 for NV/AMD respectively.
A Graphics Processing Unit is optimized for processing graphics workloads. That doesn't mean a GPU can't process AI workloads, but it won't be as efficient as a purpose-built AI coprocessor. If AMD can introduce a dedicated AI core instead of an additional 16-core CCD, that would be a huge game changer.

Finally we will be able to run "Her" locally. ;)
 

JustViewing

Senior member
Aug 17, 2022
269
473
106
It doesn't do anything.
Why waste silicon?
Don't generalize your expectations to everyone else. For me it will be huge and will have great benefit.

No it's optimized for running math.
In a SIMD (SIMT really) fashion.
You know there is much more to CPU design than just the execution units, right? Like caches, registers, numerical precision, data flow paths, instruction set, etc. Each of them can be tuned for a particular workload, and AMD/NVidia/Intel spend a huge amount of time optimizing them. So a design optimized for graphics workloads will not be optimal for AI workloads.

Besides, why are you so against an AI coprocessor? As I said earlier, if you are happy with a 30% IPC increase, then good for you. For me it is not enough.
 


Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
Same mantra Intel was pushing before we got Zen. 4C is more than enough for all desktop PCs, they said. If you want more, you'll have to buy our expensive HEDT or server CPUs. Then Zen entered the market and bumped it first to 8C and then to 16C on desktop PCs.

Now it's AMD that has stagnated and says 16C is all you need on desktop PCs. The irony…

You're not looking at it from the right perspective; they are not stagnating at all. They doubled the core count in 2017 to 8C and then again in 2019 to 16C. The latter bump wasn't needed that early; it would have made sense circa 2022 when they released Zen 4, and even then only barely, since 16C SKUs are marginal in their sales to this day.

Otherwise they just used the opportunity of their chiplet design to release a halo product, but by their own words they didn't expect the 3950X to be as successful as it was. Certainly the lack of competition made things easier, since they captured 100% of the high-throughput PC market.
 

gdansk

Diamond Member
Feb 8, 2011
4,568
7,681
136
Same mantra Intel was pushing before we got Zen. 4C is more than enough for all desktop PCs, they said. If you want more, you'll have to buy our expensive HEDT or server CPUs. Then Zen entered the market and bumped it first to 8C and then to 16C on desktop PCs.

Now it's AMD that has stagnated and says 16C is all you need on desktop PCs. The irony…
Intel until late 2017 had only quad cores outside of HEDT. Smartphones had already had 6-8 cores for years. Show me a smartphone with more cores than a 3950X. It still doesn't exist, even 4.5 years later. The A12X launched 18 months after the 7700K and had better MT performance and twice the core count. Show me an iPad that has higher MT performance than a 3950X. There is none, even 4.5 years later.

But there are smartphones with as good or better 1T than the 7950X. So what do you think AMD needs to focus on?
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
So apparently the market wanted 16C already in 2019 after all.

Now that was more than 4 years ago, and we’re long overdue for another core count increase.

A tiny part of the market. Look at their current sales: Germany is a high-income country with plenty of people who could afford such chips, yet most of their current AM5 sales are concentrated on the 7800X3D; the 7950X3D sells at a 0.15-0.2x ratio comparatively, and the 7950X is even below that.

They also have good sales of AM4 CPUs of all sorts, but the 16C is marginal; the best sellers are the 5800X3D, by far, followed by the 5700X/5800X/5600/5600G.
 
  • Like
Reactions: Tlh97 and Joe NYC

Joe NYC

Diamond Member
Jun 26, 2021
3,634
5,174
136
Don't generalize your expectations to everyone else. For me it will be huge and will have great benefit.


You know there is much more to CPU design than just the execution units, right? Like caches, registers, numerical precision, data flow paths, instruction set, etc. Each of them can be tuned for a particular workload, and AMD/NVidia/Intel spend a huge amount of time optimizing them. So a design optimized for graphics workloads will not be optimal for AI workloads.

GPUs and a dedicated neural/AI unit can perform the same tasks. What makes the dedicated parts preferable is that they are more power efficient, which matters in notebooks.

But it seems you are talking about desktop, where using the GPU for the same task is not a problem. Some memory inefficiency is not a big deal considering how much AI compute you get from a desktop GPU.

If you are thinking about the dinky NPUs Intel and AMD have added to their notebook chips, they are a fraction of the capability of a desktop GPU. There is not really a strong reason to have both an NPU and a GPU in the same desktop system; it's just duplication and waste.

Workstation GPUs are another alternative, but probably more expensive per unit of performance than gaming GPUs.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,208
583
126
Show me an iPad that has higher MT performance than a 3950X.
They are different types of devices with different types of workloads. Nobody is using their mobile phone or iPad to execute the kind of high-throughput MT workloads that some people run on desktop PCs. Hence the greater need for MT perf on desktop PCs.
 

gdansk

Diamond Member
Feb 8, 2011
4,568
7,681
136
They are different types of devices with different types of workloads. Nobody is using their mobile phone or iPad to execute the kind of high-throughput MT workloads that some people run on desktop PCs. Hence the greater need for MT perf on desktop PCs.
That's not the point, and I think you know it. You say there is stagnation comparable to Intel's. In 2018 a tablet launched that was faster in MT than the best non-HEDT part Intel had released only 18 months earlier. That is stagnation of MT performance. If AMD is stagnating in MT, then the entire industry is stagnating with them :)
 
  • Like
Reactions: delta-v

Fjodor2001

Diamond Member
Feb 6, 2010
4,208
583
126
most of their current AM5 sales are concentrated on the 7800X3D; the 7950X3D sells at a 0.15-0.2x ratio comparatively, and the 7950X is even below that.
That's still a substantial portion, especially if you add the sales of both 16C CPUs (7950X and 7950X3D) together. You said even AMD was surprised at how well the 3950X (and I assume the later 16C versions) sold.

Also, note that not everyone has to buy the top-end SKU for it to be justified. Those who only want 8C/16C can still buy that even if AMD introduces 24C/32C variants.
 

JustViewing

Senior member
Aug 17, 2022
269
473
106
GPUs and a dedicated neural/AI unit can perform the same tasks. What makes the dedicated parts preferable is that they are more power efficient, which matters in notebooks.

But it seems you are talking about desktop, where using the GPU for the same task is not a problem. Some memory inefficiency is not a big deal considering how much AI compute you get from a desktop GPU.

If you are thinking about the dinky NPUs Intel and AMD have added to their notebook chips, they are a fraction of the capability of a desktop GPU. There is not really a strong reason to have both an NPU and a GPU in the same desktop system; it's just duplication and waste.

Workstation GPUs are another alternative, but probably more expensive per unit of performance than gaming GPUs.
While I don't dispute your claim, not everybody has a good GPU. Most developers still use just a basic GPU or an integrated one. A dedicated AI coprocessor would have greater reach than a discrete GPU. I was talking about an AI coprocessor the size of a 16-core CCD.
I admit I don't know how a hypothetical AI coprocessor like that would perform. It's just my wishful thinking after playing around with local LLMs.