Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)


A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Sure, if you can afford a TR 3990X or 5995WX system.
Even a Zen 16-core is plenty fast. The 16-core was what broke Intel's back, because they used to have the high core counts in their HEDT lineup. AMD ran a lorry over Intel's head, backed it up, and ran it over again.

Judging by the abysmal Arrow Lake leaks based on an ES, and what we know about Zen 5, AMD will be running over Intel's gonads now.
 

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
If you want the most power-efficient video encoder you buy that AMD/Xilinx hardware. 1500 bones gets you something more efficient than the two.
My main point was that GPU general-compute µArch has limited uses where it is obviously better than a CPU for power efficiency.

Video, audio and image codec ASIC blocks bolted onto the GPU design are even more limited, to the point that they cannot serve any function beyond the spec of each codec - although some such solutions do optimise the ASIC to prevent unnecessary function duplication between different codecs.

ASICs give better perf/W in exchange for total reliance on the ODM's codec implementation, which varies wildly from pants to meh compared to the best that x264, x265 or libaom software solutions can offer at a given bitrate.

Sadly GPU general compute can't even run those complex programs without choking like a man trying to swallow a crushed-glass smoothie.

I think an FPGA configuration could offer a nice middle ground, but I've yet to see such a system demonstrated - I'm not even sure what the AMD/Xilinx solution is actually using, as despite the Xilinx name they didn't take any pains to highlight it as FPGA-based hardware.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
My main point was that GPU general-compute µArch has limited uses where it is obviously better than a CPU for power efficiency.

Video, audio and image codec ASIC blocks bolted onto the GPU design are even more limited, to the point that they cannot serve any function beyond the spec of each codec - although some such solutions do optimise the ASIC to prevent unnecessary function duplication between different codecs.

ASICs give better perf/W in exchange for total reliance on the ODM's codec implementation, which varies wildly from pants to meh compared to the best that x264, x265 or libaom software solutions can offer at a given bitrate.

Sadly GPU general compute can't even run those complex programs without choking like a man trying to swallow a crushed-glass smoothie.

I think an FPGA configuration could offer a nice middle ground, but I've yet to see such a system demonstrated - I'm not even sure what the AMD/Xilinx solution is actually using, as despite the Xilinx name they didn't take any pains to highlight it as FPGA-based hardware.
The Xilinx AMD card is not an FPGA to my knowledge, based on the material I've seen on the MA35D. Your example, while a sound theory, is a stretch. How many new codecs are on the horizon? H.265, despite its many flaws, is still sticking around, and the lazy, slow adoption of AV1 by major players shows very few of them care to do the legwork. H.266/VVC is the next big jump, but who knows when that'll show up on the scene.
 

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
Even a Zen 16-core is plenty fast
It's decent, sure, but I have a 3950X myself - even encoding just 1080p with x265 on reasonable settings it isn't exactly blowing my eyebrows off.

With libaom it's even worse 😭

I can't even imagine how bad AV2 is going to be when it's first standardised 😅
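To put numbers on that kind of complaint, here's a minimal sketch of timing the sort of software-encode runs being compared, x265 vs libaom-av1 through ffmpeg. The input filename and quality settings are hypothetical stand-ins for "reasonable settings", not anyone's actual workflow:

```python
# Time a 1080p software encode with x265 vs libaom-av1 via ffmpeg.
# Input file and quality settings are illustrative only.
import subprocess
import time

def encode(codec_args, outfile, infile="clip_1080p.mkv"):
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", infile, *codec_args, "-an", outfile],
        check=True,
    )
    return time.perf_counter() - start

# x265 at a middle-of-the-road preset/CRF.
t_x265 = encode(["-c:v", "libx265", "-preset", "medium", "-crf", "22"],
                "out_x265.mkv")
# libaom-av1 is far slower at comparable quality targets.
t_av1 = encode(["-c:v", "libaom-av1", "-cpu-used", "4", "-crf", "30", "-b:v", "0"],
               "out_av1.mkv")
print(f"x265: {t_x265:.0f}s  libaom-av1: {t_av1:.0f}s")
```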
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
It's decent, sure, but I have a 3950X myself - even encoding just 1080p with x265 on reasonable settings it isn't exactly blowing my eyebrows off.

With libaom it's even worse 😭

I can't even imagine how bad AV2 is going to be when it's first standardised 😅
The 3950X was good, but the 5950X was much better. This is the first time I'm reading of AV2. I'd read into it, but I'm dozing off after a few hot toddies for a sore throat. Must have overdone it with the ice-cold rosé last weekend.
 

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
H.266/VVC is the next big jump
It's been finalised since July 2020.

Its successor, ECM / H.267, is in the works as we speak.

They are talking about a whopping 10x increase in decoder complexity, to the point that an ASIC is likely to be the only sensible way to go for consumers, with chunked compute on 64C+ server systems the only viable solution for encoding.

I'd say we are well past the point of diminishing returns with that one.
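For what it's worth, "chunked compute" here just means splitting the source into segments and encoding them in parallel across the cores of a big server. A toy sketch of the idea, with hypothetical filenames and chunk sizes (a real pipeline would split at scene cuts and merge the results with ffmpeg's concat demuxer):

```python
# Toy sketch of chunk-parallel encoding on a many-core server.
# Chunk boundaries and filenames are hypothetical.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def encode_chunk(job):
    infile, start_s, dur_s, outfile = job
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start_s), "-t", str(dur_s), "-i", infile,
         "-c:v", "libx265", "-crf", "22", "-an", outfile],
        check=True,
    )
    return outfile

CHUNK = 60  # seconds per chunk
jobs = [("movie.mkv", i * CHUNK, CHUNK, f"chunk_{i:04d}.mkv")
        for i in range(120)]  # ~2 hours of source

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=64) as pool:  # the 64C+ server case
        chunks = list(pool.map(encode_chunk, jobs))
    # The chunks would then be stitched back together losslessly.
```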
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
It's been finalised since July 2020.

Its successor, ECM / H.267, is in the works as we speak.

They are talking about a whopping 10x increase in decoder complexity, to the point that an ASIC is likely to be the only sensible way to go for consumers, with chunked compute on 64C+ server systems the only viable solution for encoding.

I'd say we are well past the point of diminishing returns with that one.
It still needs to be adopted by the software side of the equation, and that'll be a while. Even then, it'll be a long time before either of them is adopted by the general public. Large-scale conversion and encoding outfits may adopt it. It's too complex a situation to say one thing or another about the future prospects.
 

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
this is the first time I'm reading of AV2
AV2 has been in the works for years.

The AOM people just don't talk about it much.

Especially as AV1 is still less than ubiquitous for the time being, given Qualcomm's and Apple's reluctance to implement it.

Here's a link to the main AV2 code git repo.
 
  • Like
Reactions: Tlh97 and dr1337

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
It still needs to be adopted by the software side of the equation, and that'll be a while. Even then, it'll be a long time before either of them is adopted by the general public. Large-scale conversion and encoding outfits may adopt it. It's too complex a situation to say one thing or another about the future prospects.
I doubt that VVC will find much use outside of digital broadcast video transmission standards like ATSC or DVB, which both seem perfectly happy with proprietary standards no matter how problematic the patent encumbrance becomes.

As you say, ECM is still way too early to call, but given the low overall enthusiasm around VVC and the speed at which they launched the ECM endeavour after it, I'd warrant that its prospects are not great.

As for the big streamers, they do not wish to be beholden to patent licensing costs, which is why so many of them signed up to the AOM development effort - some, like Netflix, have even sponsored AOM dev events with their own speakers giving keynotes.
 

Doug S

Diamond Member
Feb 8, 2020
3,575
6,312
136
It's been finalised since July 2020.

Its successor, ECM / H.267, is in the works as we speak.

They are talking about a whopping 10x increase in decoder complexity, to the point that an ASIC is likely to be the only sensible way to go for consumers, with chunked compute on 64C+ server systems the only viable solution for encoding.

I'd say we are well past the point of diminishing returns with that one.

Hasn't every successive MPEG standard been around a 10x increase in complexity? In the beginning they assumed improvements in clock speed would provide that, but when those increases moderated, dedicated hardware became their expectation. As an inherently parallel problem, throwing more transistors at the dedicated hardware addresses it. Fortunately, while new processes haven't delivered nearly the performance gains they used to, we are still getting pretty decent improvements in logic density.

So I feel confident in expecting Apple will support VVC, and it is a pretty safe bet Qualcomm will follow a little later, like they did with HEVC. I agree that we're reaching the point of diminishing returns, at least for consumer products. Internet and cellular speeds continue to increase, so there is little demand for better in-transit compression at the edges, but it does help with storage as smartphone videos gain more pixels and higher frame rates (and are probably used more) faster than decreases in NAND prices allow for more local storage.

I agree with you that H.267 may never reach consumer products. The demand for increases in pixels and frame rates in consumer hardware is probably plateauing, so before H.267 arrives we may have reached a point where further video compression would benefit too small a percentage of smartphone customers to be worth implementing.
 
  • Like
Reactions: Tlh97 and Schmide

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
Hasn't every successive MPEG standard been around a 10x increase in complexity?
Encode-wise, yes - the complexity increase is usually pretty huge, and cripplingly so before the first optimised software codec implementations exist.

Decode-wise, no - it is usually a focus to keep decode complexity increases to a minimum.

so before H.267 arrives we may have reached a point where further video compression would benefit too small a percentage of smartphone customers to be worth implementing
You misunderstand the greater benefit of compression on the content creation and delivery side, which is bandwidth consumption for the big streamers.

At the end of the day, what the consumers are taking in at the client end is a tiny drop in an ocean compared to what the streaming content creators are pushing out every minute worldwide.

Even a 5% decrease for the same quality is something they would get behind, as long as the compute cost wasn't way out there beyond what they already have.
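A back-of-envelope illustration of why even small percentage savings matter at streaming scale; every figure below is invented purely to show the arithmetic:

```python
# Hypothetical numbers: what a 5% bitrate saving means at streamer scale.
monthly_egress_pb = 10_000      # assumed CDN egress, petabytes/month
cost_per_pb_usd = 5_000         # assumed delivery cost per petabyte
saving_fraction = 0.05          # "even a 5% decrease"

saved_pb = monthly_egress_pb * saving_fraction
print(f"{saved_pb:,.0f} PB/month saved "
      f"= ${saved_pb * cost_per_pb_usd:,.0f}/month")
# 500 PB/month saved = $2,500,000/month - tiny percentages, real money.
```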
 

Doug S

Diamond Member
Feb 8, 2020
3,575
6,312
136
You misunderstand the greater benefit of compression on the content creation and delivery side, which is bandwidth consumption for the big streamers.

At the end of the day, what the consumers are taking in at the client end is a tiny drop in an ocean compared to what the streaming content creators are pushing out every minute worldwide.

Even a 5% decrease for the same quality is something they would get behind, as long as the compute cost wasn't way out there beyond what they already have.

But Netflix et al. can't dictate to customers that they must use hardware that supports a certain compression scheme. They can't shut off MPEG-4 support tomorrow to reduce their bandwidth consumption. They don't have any way to encourage customers to connect to their service with something capable of using VVC.

All they can do is support VVC and use it when that hardware reaches the consumer market. But if that hardware NEVER reached the consumer market (if Apple and Qualcomm and Broadcom said "no thanks, not worth the bother"), then Netflix could support VVC all they want and even offer discounts for customers using it, but those customers wouldn't have any way to do so with their smartphone, set-top box or smart TV.
 

soresu

Diamond Member
Dec 19, 2014
4,105
3,566
136
But Netflix et al. can't dictate to customers that they must use hardware that supports a certain compression scheme. They can't shut off MPEG-4 support tomorrow to reduce their bandwidth consumption. They don't have any way to encourage customers to connect to their service with something capable of using VVC.
Not quite the way it works - it's not an all-or-nothing problem.

Take their AV1 rollout: they probably started encoding content with it at production level only after they determined that supporting SoCs in the relevant market segments had shipped X million units.

Enough to make it worth adding extra server infrastructure for the task, to benefit from the bandwidth drop.
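That "X million units" threshold is essentially a payback calculation. A sketch with entirely made-up costs, just to show the shape of the decision:

```python
# Made-up payback model: extra encode infrastructure vs bandwidth saved,
# as a function of how many decode-capable devices have shipped.
INFRA_COST_USD = 2_000_000          # assumed cost of extra encode servers
EGRESS_COST_PER_GB = 0.002          # assumed CDN cost per GB delivered
GB_SAVED_PER_HOUR = 0.3             # assumed saving per viewing hour
HOURS_PER_DEVICE_MONTH = 20         # assumed viewing on capable devices

def payback_months(capable_devices: float) -> float:
    monthly_saving = (capable_devices * HOURS_PER_DEVICE_MONTH
                      * GB_SAVED_PER_HOUR * EGRESS_COST_PER_GB)
    return INFRA_COST_USD / monthly_saving

for n in (1e6, 10e6, 100e6):
    print(f"{n/1e6:>5.0f}M devices -> payback in {payback_months(n):6.1f} months")
```

Under these invented numbers the switch only pays for itself quickly once tens of millions of capable devices are in the field, which is why the rollout waits on SoC shipments.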

If you want to talk any further about this I would suggest setting up another thread elsewhere - I've derailed the topic a bit 😅
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
tbf mate, I've got no recollection of even making those posts this morning. The strength of this cough syrup tincture is a little too high for me.

What I will say is, while I do appreciate your input sor, it's gonna be a long time until the general public or even pirates begin using such codecs. There are far more important problems to deal with than codecs at the mo.
 

Tup3x

Golden Member
Dec 31, 2016
1,272
1,405
136
AV2 has been in the works for years.

The AOM people just don't talk about it much.

Especially as AV1 is still less than ubiquitous for the time being, given Qualcomm's and Apple's reluctance to implement it.

Here's a link to the main AV2 code git repo.
Snapdragon 8 Gen 2 has AV1 support.
 
  • Like
Reactions: Tlh97 and soresu

Tigerick

Senior member
Apr 1, 2022
847
799
106
RGT has brief info regarding the upcoming Sarlak platform in the ASUS TUF series. Based on my understanding of ASUS, here are my speculations on the upcoming TUF A16 series, to let you guys know more about AMD's new platform.

| | TUF A16 2023 | TUF A16 2025 Base Model | TUF A16 2025 Halo Model |
|---|---|---|---|
| APU | AMD Ryzen 7 7735HS: 8x Zen3, 16MB L3 cache, 7600S 8GB GDDR6 | AMD Ryzen 9: 12x Zen5, 32MB L3 cache, RDNA3+ 16WGP, 2048 ALU with 24MB IC? | AMD Ryzen 9: 16x Zen5, 32MB L3 cache, RDNA3+ 20WGP, 2560 ALU with 32MB IC |
| Memory | 128-bit DDR5 16GB | 192-bit LPDDR5x 24GB | 256-bit LPDDR5x 32GB |
| Display | 16" FHD+ 1920x1200 165Hz | 16" FHD+ 1920x1200 165Hz | 16" QHD+ 2560x1600 240Hz |
| Storage | 512GB SSD | at least 1TB SSD | at least 2TB SSD |
| Battery | 90 WHrs | 100 WHrs | 100 WHrs |
| AC Adapter | 200W | ? | ? |
| Price | $799 | Estimated: $999 | $1,299 |

@TESKATLIPOKA Do the prices make sense to you? ;)
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Yep, the Dimensity 1000 was the first to have AV1 decode a few years ago, then Exynos and now Snapdragon. All three support up to 8K H.265 encode if I'm not mistaken. Even middling phone chips have had HEVC encoding for several years now.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
RGT has brief info regarding the upcoming Sarlak platform in the ASUS TUF series. Based on my understanding of ASUS, here are my speculations on the upcoming TUF A16 series, to let you guys know more about AMD's new platform.

| | Base Model | Halo Model |
|---|---|---|
| APU | AMD Ryzen 9 8850HX: 6x Zen5 + 8x Zen5c, 32MB L3 cache, RDNA3+ 16CU, 2048 ALU ? | AMD Ryzen 9 8950HX: 8x Zen5 + 8x Zen5c, 40MB L3 cache, RDNA3+ 20CU, 2560 ALU with 32MB IC |
| Memory | 192-bit LPDDR5x 24GB | 256-bit LPDDR5x 48GB |
| Display | 16" FHD+ 1920x1200 165Hz | 16" QHD+ 2560x1600 240Hz |
| Storage | at least 1TB SSD | at least 2TB SSD |
| Price | Estimated: $999 - $1,299 | $1,299 - $1,599 |

@TESKATLIPOKA Do the prices make sense to you? ;)
Isn't that the price for the Chinese market? International would be another $200-300? Hard to tell if it's good or not.
I am confused about something else. :D

Is the base model Strix Point or Strix Halo (Sarlak)?
It supposedly has 6x Zen5 + 8x Zen5c, which looks like a cut-down Sarlak, but only 16CU and no IC looks like Strix Point.

The Halo model has 32MB IC but only 20CU? What happened to the 40CU IGP?
Then why use 192/256-bit LPDDR5x for only a 16-20CU IGP? It looks like overkill compared to Phoenix. Maybe higher clocks could explain it, but not really the extra 32MB IC in the 8950HX.

It looks like Zen5 has 4MB L3, but Zen5c only 1MB. Of course it's a victim cache, so each core can use the whole cache.

Is 192-bit LPDDR5x 24GB for the base model 6x 32Gbit chips, or 3x 64Gbit chips?
Is 256-bit LPDDR5x 48GB for the halo model 8x 48Gbit chips, or 4x 96Gbit chips?
BTW, 8533 Mbps modules from Samsung come only in 64, 96 and 128Gbit densities. 7500 Mbps spans 16Gbit -> 128Gbit.
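The chip-count arithmetic works out the same either way: capacity in GB is chips × per-chip density in Gbit ÷ 8, with package width (x32 vs x64) deciding how many chips fill the bus. A quick sanity check:

```python
# Capacity sanity check: GB = chips * density_gbit / 8.
def capacity_gb(chips: int, density_gbit: int) -> int:
    return chips * density_gbit // 8

assert capacity_gb(6, 32) == capacity_gb(3, 64) == 24   # 192-bit, 24GB
assert capacity_gb(8, 48) == capacity_gb(4, 96) == 48   # 256-bit, 48GB
```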

So many questions unanswered. :)

edit:
@igor_kavinski $899 would be too little for what the 8850HX is offering; then the 8950HX at $1,599 wouldn't be worth it.
For an extra $300 you gain +1TB SSD, +24GB RAM, 2 extra CPU cores, a 25% bigger IGP, 32MB IC, and a better display.
That's a very good offer.
The question is if, after release, the difference in cost really will be only $300.
 
  • Like
Reactions: Tlh97 and Saylick

TESKATLIPOKA

Platinum Member
May 1, 2020
2,696
3,260
136
@Tigerick
I see you also added IC for the base model, but where did you see this ASUS TUF Sarlak?
I am looking at the RGT channel and I don't see anything like that there.
BTW, what is your assumption and what are the claims from RGT? Looking at his videos, it seems he knows absolutely nothing and is purely guessing.
 

Tigerick

Senior member
Apr 1, 2022
847
799
106
Isn't that the price for the Chinese market? International would be another $200-300? Hard to tell if it's good or not.
I am confused about something else. :D

Best Buy USA currently lists the ASUS TUF A16 with 7600S 8GB @ $799 (dropped from $1,099); it seems this is the model that will be replaced by the Sarlak base model.
Is the base model Strix Point or Strix Halo (Sarlak)?
It supposedly has 6x Zen5 + 8x Zen5c, which looks like a cut-down Sarlak, but only 16CU and no IC looks like Strix Point.

The Halo model has 32MB IC but only 20CU? What happened to the 40CU IGP?
Then why use 192/256-bit LPDDR5x for only a 16-20CU IGP? It looks like overkill compared to Phoenix. Maybe higher clocks could explain it, but not really the extra 32MB IC in the 8950HX.

The base model will most likely be based on a cut-down version of Sarlak. I'm not really sure about the CU/IC amount, but a 192-bit memory bus makes sense with 24GB LPDDR5x support. It is like AMD moved the discrete GPU with 8GB GDDR6 into the Sarlak APU.

I never believed in 40CU of RDNA3+; even with 256-bit LPDDR5x-8533, total memory bandwidth is around 273GB/s, which is similar to the 7600S with 32CU. Remember, an APU has to share memory bandwidth between the CPU and GPU, thus I believe that with real dual-issue ALUs, Sarlak will most likely have 20CU/WGP with 2560 ALUs...
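That bandwidth figure falls straight out of bus width and transfer rate; a one-function check (the 16Gbps GDDR6 line is just a ballpark comparison point):

```python
# Peak bandwidth: (bus_bits / 8) bytes per transfer * transfers per second.
def bandwidth_gbs(bus_bits: int, mtps: int) -> float:
    return bus_bits / 8 * mtps / 1000  # GB/s

print(bandwidth_gbs(256, 8533))   # ~273 GB/s, the figure quoted above
print(bandwidth_gbs(128, 16000))  # 128-bit GDDR6 @ 16Gbps: 256 GB/s ballpark
```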

Maybe @adroc_thurston can provide some insights?

It looks like Zen5 has 4MB L3, but Zen5c only 1MB. Of course it's a victim cache, so each core can use the whole cache.

Is 192-bit LPDDR5x 24GB for the base model 6x 32Gbit chips, or 3x 64Gbit chips?
Is 256-bit LPDDR5x 48GB for the halo model 8x 48Gbit chips, or 4x 96Gbit chips?
BTW, 8533 Mbps modules from Samsung come only in 64, 96 and 128Gbit densities. 7500 Mbps spans 16Gbit -> 128Gbit.

Memory density is unknown atm. 48GB seems high at first (this is the amount mentioned by RGT in his latest video regarding Sarlak), but it makes sense if ASUS wants to create a Halo model with the full Sarlak platform... and the RAM is no longer upgradeable :eek:
 
  • Like
Reactions: Tlh97

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Halo model, my sweet patootie. ODMs need to stop being cheap and giving consumers scraps for RAM. More RAM is good even if you don't utilise it; Windows will. It makes everything go like buttah.
 
  • Haha
Reactions: Tlh97 and Thibsie

Tigerick

Senior member
Apr 1, 2022
847
799
106
The 8950HX makes the 8850HX look pretty bad in comparison. Considering what you are getting for $1,599 with the 8950HX, the 8850HX shouldn't be one cent above $899.
Hmm, the issue is that at around $1,500 we have much better choices than a large iGPU. I have updated the table to include the ROG Strix G17 with Fire Range (12 Zen5 cores) and an RTX 4070 GPU. Based on current pricing, the upcoming G17 with full-fat Zen5 cores will offer much better graphics performance at a slightly higher price. Would you rather go for the RTX 4070 or an iGPU?

And if AMD manages to launch mobile N43/N44 along with Fire Range, then we have another choice of discrete GPU. Based on leaks, Sarlak has around 63% of the CUs of mobile N44; you can guess how much performance difference there is between the two...