Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)

Page 273

S'renne

Member
Oct 30, 2022
149
108
86
You are just speculating.

P.S. If there will be a replacement for Strix Halo - it will use 9600 LPCAMM2 LPDDR5 memory, because that is the next logical step.
With Infinity Cache that should somewhat help but 8500 LPDDR5 should be the safest bet in costs imo
 
  • Like
Reactions: Joe NYC

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
With Infinity Cache that should somewhat help but 8500 LPDDR5 should be the safest bet in costs imo
That's the memory that Strix Halo will already use.

Next gen will only increase the memory bandwidth requirement.
 
  • Like
Reactions: S'renne

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,761
106
Which makes sense.

LPDDR5T/5X-9600 is simply an extension of the LPDDR5X-8533 standard.

LPDDR6 on the other hand will be a whole new standard.

So an LPDDR5X-8533 controller can easily be overclocked to run at 9600, but not at LPDDR6 speeds, as that would require a new controller design.
 

Tigerick

Senior member
Apr 1, 2022
844
797
106
You are just speculating.

P.S. If there will be a replacement for Strix Halo - it will use 9600 LPCAMM2 LPDDR5 memory, because that is the next logical step.
Everybody is speculating at this point; the only difference is in the memory interface.

I have calculated the memory bandwidth between 256-bit CAMM2 9600 and 192-bit LPDDR6-12800, you know what? They are the same. Thus it is up to OEMs to decide which one to implement.

I know Apple and Qualcomm are not going to implement LPCAMM2. The rest is up to OEMs, how about we see how many OEMs are using LPDDR5X or LPCAMM2 when AMD launches Sarlak then we will know better...
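Tigerick's bandwidth comparison is easy to verify with the standard peak-bandwidth formula (bus width in bytes times transfer rate); a minimal sketch:

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mtps: int) -> float:
    """Peak theoretical bandwidth in GB/s: bytes per transfer times MT/s."""
    return bus_width_bits / 8 * transfer_rate_mtps / 1000

# 256-bit LPCAMM2 running LPDDR5X-9600 vs 192-bit LPDDR6-12800
camm2 = peak_bandwidth_gbs(256, 9600)
lpddr6 = peak_bandwidth_gbs(192, 12800)
print(camm2, lpddr6)  # 307.2 307.2 -> identical peak bandwidth
```

Both configurations land on exactly 307.2 GB/s, which is why the choice comes down to cost and implementation rather than bandwidth.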
 
  • Like
Reactions: Tlh97
Jul 27, 2020
27,948
19,100
146
The rest is up to OEMs, how about we see how many OEMs are using LPDDR5X or LPCAMM2 when AMD launches Sarlak then we will know better...
LPCAMM2 may be used in workstation or business-class laptops, where there is a real need to upgrade RAM in the future as requirements grow; LPDDR5X for home-use laptops.
 
  • Like
Reactions: Tlh97 and Tigerick

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,761
106
I have calculated the memory bandwidth between 256-bit CAMM2 9600 and 192-bit LPDDR6-12800, you know what? They are the same. Thus it is up to OEMs to decide which one to implement.
Why calculate?
Simple logic gives the answer.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Everybody is speculating at this point, it is just a different in memory interface.

I have calculated the memory bandwidth between 256-bit CAMM2 9600 and 192-bit LPDDR6-12800, you know what? They are the same. Thus it is up to OEMs to decide which one to implement.

I know Apple and Qualcomm are not going to implement LPCAMM2. The rest is up to OEMs, how about we see how many OEMs are using LPDDR5X or LPCAMM2 when AMD launches Sarlak then we will know better...
No.

It's up to AMD which MEMORY CONTROLLER to implement in the Strix Point replacement.

If Zen 6 is designed to work with DDR5, that's what you will get. End of story.
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,761
106
Then 256-bit bus trace vs 192-bit which one is cheaper to implement, simple huh?
That's not the point.

The point is we have:

LPDDR6-12800 + 192 bit
LPDDR6-9600 + 256 bit

One has 33% more transfer rate, but 25% less bus width. It should be immediately obvious that the two solutions have the same bandwidth. No need for calculations.
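The cancellation can be written as exact ratios (a sketch using Python's fractions module):

```python
from fractions import Fraction

rate_ratio = Fraction(12800, 9600)   # 33% higher transfer rate -> 4/3
width_ratio = Fraction(192, 256)     # 25% narrower bus         -> 3/4
print(rate_ratio * width_ratio)      # 1 -> the two factors cancel exactly
```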
 
  • Like
Reactions: Tlh97

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
You sure?

:p
Also, the 760M is just 15% slower than the 780M while having 33% fewer CUs.

If this is not memory starvation, I do not know what is.
DDR5-5200 -> DDR5-7200 resulted in only 7% increase according to Computerbase despite 38.5% faster memory.
If it were heavily BW starved, then the improvement wouldn't be so low.
Because AMD's APUs have been bandwidth starved for two years now?! It started with Rembrandt, and now Phoenix is worse. Phoenix's iGPU is clocked over 15% higher than Rembrandt's and uses RDNA3, yet it's not even 10% faster with the same RAM and barely 15% faster with faster RAM.

Also, OEMs will always cheap out on faster RAM, so don't expect every device to come with LPDDR5X-8533. New tests of the desktop versions also show that 7200 brings nearly no difference compared to 5200 (only about 7%). All this leads to the conclusion that Strix will be bandwidth starved to the moon.
38.5% faster memory results in only 7% better performance, and for you that means Strix will be heavily BW starved? Why? It should mean something else.
8533 vs 5200 offers 64% more BW.
BTW, I never said every device will come with 8533, but the good ones should use it. If someone buys a device with slower memory, then it's their own fault.
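For reference, the percentages being argued over work out as follows (my arithmetic on the figures quoted in this exchange; the 7% gain is the ComputerBase desktop result cited above):

```python
speedup_7200 = 7200 / 5200 - 1   # ~0.385, the "38.5% faster memory"
speedup_8533 = 8533 / 5200 - 1   # ~0.641, the "64% more BW"
observed_gain = 0.07             # iGPU gain measured going 5200 -> 7200

# If the iGPU were fully bandwidth-bound, performance would scale roughly
# 1:1 with bandwidth; only 7% realized out of 38.5% available suggests the
# 5200 configuration was far from fully bandwidth-limited.
scaling = observed_gain / speedup_7200
print(round(speedup_7200, 3), round(speedup_8533, 3), round(scaling, 2))
```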
 
Last edited:

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,761
106
BTW, I never said every device will come with 8533, but the good ones should use it. If someone buys a device with slower memory, then it's their own fault.
Another (+) for on-package memory. The CPU/APU maker can dictate the most suitable memory type. No more OEMs pulling shenanigans with the RAM.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
DDR5-5200 -> DDR5-7200 resulted in only 7% increase according to Computerbase despite 38.5% faster memory.

38.5% faster memory results in only 7% better performance, and for you that means Strix will be heavily BW starved? What? It should mean the opposite.
8533 vs 5200 offers 64% more BW.
BTW, I never said every device will come with 8533, but the good ones should use it. If someone buys a device with slower memory, then it's their own fault.
The small difference is because the bandwidth is shared with the CPU.

It's never reserved solely for the GPU.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
The small difference is because the bandwidth is shared with the CPU.

It's never reserved solely for the GPU.
BW is shared in both cases!
It doesn't matter whether it's DDR5-5200 or DDR5-7200.
Yet you want us to believe that with increased BW the CPU will consume a bigger chunk of it.
 
Last edited:
  • Like
Reactions: Tlh97 and Executor_

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
Pretty sure he's guessing. MB manufacturers can just reuse their current boards and add "now supports Zen XXXX CPUs!!!" Or call them X670E Super Overdrive Extreme or some other marketing gobbledygook.

Of course, but anyone here could "predict" that AMD will take advantage of Zen 5 to release a new chipset to cash in on the novelty; even an update of the I/O die would be no surprise, since that's a trivial task in comparison to a CPU design.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
BW is shared in both cases!
It doesn't matter whether it's DDR5-5200 or DDR5-7200.
Yet you want us to believe that with increased BW the CPU will consume a bigger chunk of it.
In the 8600G you have fewer cores competing for the same resources, on both the CPU and GPU sides.

Hence the GPU is only 15% slower despite having 33% fewer CUs, and hence in the 8700G you do not get any benefit from higher memory clocks.

That bandwidth is eaten by the CPU and its two additional cores.
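Glo.'s point can be put in numbers (a rough sketch; it assumes the commonly cited CU counts of 8 for the 760M and 12 for the 780M, and treats the "15% slower" figure as exact):

```python
cus_760m, cus_780m = 8, 12
perf_760m = 0.85                 # 760M performance relative to 780M = 1.0

# Throughput each 760M CU achieves relative to a 780M CU:
per_cu = (perf_760m / cus_760m) / (1.0 / cus_780m)
print(round(per_cu, 3))  # 1.275 -> ~27% more work per CU, which is what
                         # you would expect if the 780M's CUs are starved
                         # for shared bandwidth
```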
 
Jul 27, 2020
27,948
19,100
146
even an update of the I/O die would be no surprise, since that's a trivial task in comparison to a CPU design.
Umm...getting bug-free USB 3 hasn't been trivial for them :p

But maybe the 700 series chipsets finally get it right.
 

Abwx

Lifer
Apr 2, 2011
11,884
4,873
136
Umm...getting bug-free USB 3 hasn't been trivial for them :p

But maybe the 700 series chipsets finally get it right.

There's no USB bug in the APUs; on motherboards there are often third-party USB chips from ASMedia, and generally those so-called bugs are not even related to the chips but to the motherboard layout.

5-10 Gb/s is not a trivial speed; it requires well-designed layouts to get efficient transmission lines with no standing waves, a well-known issue in high-frequency design.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,205
580
126
What’s the source for the statement that mass production started Oct-Nov 2023? You mentioned the Greymon(s), but do you have any link to that?

An admin from a Chinese forum, but there's so much slang that it's hard to translate with Google Translate, so I preferred not to attach links before.
https://tieba.baidu.com/p/8718620228 2023-11-14 <mass production start>
https://tieba.baidu.com/p/8700302663 2023-11-07
https://tieba.baidu.com/p/8605856969 2023-09-18
AFAIK he's the only one who has a contact at the factory, just like Greymon, who decided to quit Twitter due to his RDNA3 performance misprediction.
So mass production of Zen5 DT started in Oct-Nov 2023. Then why is it now not expected to be released until 2024H2 (likely meaning Nov-Dec 2024, or they would have said 2024Q3).

Are we expecting ~12 months from start of mass production until release for Zen5?
 

StefanR5R

Elite Member
Dec 10, 2016
6,666
10,545
136
DDR5-5200 -> DDR5-7200 resulted in only 7% increase according to Computerbase despite 38.5% faster memory.
So we know that one memory configuration has 1.385 times the clock of the other.
But who can tell us by what ratio the effective bandwidth between the CUs and memory increased?

[And then the memory access patterns of the GPU and of the CPU in Computerbase's tests would need to be characterized…]
 

adroc_thurston

Diamond Member
Jul 2, 2023
7,058
9,797
106
So mass production of Zen5 DT started in Oct-Nov 2023. Then why is it now not expected to be released until 2024H2 (likely meaning Nov-Dec 2024, or they would have said 2024Q3).

Are we expecting ~12 months from start of mass production until release for Zen5?
you're shadowboxing nonexistent stuff again