Discussion AMD Cezanne/Zen 3 APU Speculation and Discussion

Page 7 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

dr1337

Senior member
May 25, 2020
333
565
106
Fresh leak out today. Not much is known, but at least 8 CUs are confirmed. It's probably an engineering sample; the core count is unknown and clocks may not be final.

This is very interesting to me because Cezanne is seemingly limited to 8 CUs, and it seems unlikely to me that AMD could squeeze any more performance out of Vega. A CPU-only upgrade of Renoir may be lackluster compared to Tiger Lake's quite large GPU.

What do you guys think? Will Zen 3 be a large enough improvement in APU form? Will it have the full cache? Are there more than 8 CUs? Has AMD truly evolved Vega yet again, or is it more like RDNA?
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,356
2,848
106
First I want to see the specs of mobile Navi and mobile Ampere; then we will see if AMD delivers much better perf/W or not.
To be competitive AMD will need higher clocks than Nvidia, so I wouldn't be so sure AMD will be much more efficient than its competition.
At least we don't need to worry about the CPU.
 

moinmoin

Diamond Member
Jun 1, 2017
4,952
7,661
136
Interesting. OEMs usually reserve the high-end NV mGPUs for Intel chips.
Last month wccftech (yeah...) reported this:
"Because of the PCIe bottleneck on Renoir, you could only get up to an RTX 2060 with an AMD CPU. Once the AMD 5000 series (Cezzane-H) lands, however, this is going to change with NVIDIA high-end GPUs freely available equally on AMD and Intel parts (TigerLake H)."
🤷
 

uzzi38

Platinum Member
Oct 16, 2019
2,632
5,952
146
Last month wccftech (yeah...) reported this:
"Because of the PCIe bottleneck on Renoir, you could only get up to an RTX 2060 with an AMD CPU. Once the AMD 5000 series (Cezzane-H) lands, however, this is going to change with NVIDIA high-end GPUs freely available equally on AMD and Intel parts (TigerLake H)."

...

Cezanne is only PCIe 3.0 x8 on FP6 as well. It's an FP6 limitation; it won't change until FP7.

It had nothing to do with that, only manufacturers' lack of faith in AMD (for good reason: AMD's mobile game until recently was horrid).

But yeah, this year will be good for AMD devices.
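For context on what that FP6 link limitation actually costs, here is a quick back-of-the-envelope sketch. The formula is the standard PCIe calculation; the results are theoretical per-direction maxima, not measured numbers.

```python
# Rough per-direction PCIe bandwidth: lanes * transfer_rate * encoding_efficiency.
# PCIe 3.0 runs at 8 GT/s with 128b/130b line encoding.
def pcie_bandwidth_gbps(lanes: int, gt_per_s: float) -> float:
    """Theoretical per-direction bandwidth in GB/s."""
    efficiency = 128 / 130                     # 128b/130b encoding overhead
    return lanes * gt_per_s * efficiency / 8   # bits -> bytes

fp6_link = pcie_bandwidth_gbps(lanes=8, gt_per_s=8.0)    # Cezanne on FP6: 3.0 x8
full_link = pcie_bandwidth_gbps(lanes=16, gt_per_s=8.0)  # desktop-style 3.0 x16

print(f"PCIe 3.0 x8:  {fp6_link:.2f} GB/s")   # ~7.88 GB/s
print(f"PCIe 3.0 x16: {full_link:.2f} GB/s")  # ~15.75 GB/s
```

So a dGPU behind the FP6 link gets roughly half the host bandwidth of a full x16 slot, which is the bottleneck the wccftech quote (inaccurately) attributes the GPU tiering to.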
 

moinmoin

Diamond Member
Jun 1, 2017
4,952
7,661
136
...

Cezanne is only PCIe 3.0 x8 on FP6 as well. It's an FP6 limitation; it won't change until FP7.

It had nothing to do with that, only manufacturers' lack of faith in AMD (for good reason: AMD's mobile game until recently was horrid).

But yeah, this year will be good for AMD devices.
Yeah, the PCIe stuff is nonsense, though it would be really interesting to know who exactly was limiting those Nvidia models. We always talked about the OEM/ODM doing that; this report, while technically false, suggests it's Nvidia's doing.
 
  • Like
Reactions: Tlh97 and soresu

Asterox

Golden Member
May 15, 2012
1,026
1,775
136

pman6

Junior Member
Oct 10, 2011
18
1
71
I wonder how Intel Xe will compare to Cezanne's Vega.
Obviously you're not gaming for shit on either one.

I'm disappointed that Cezanne won't be future-proof, given its lack of AV1 hardware decoding.

4K60 AV1 videos are brutal on the CPU.

I'm leaning toward Intel now because of AV1 support.
 

uzzi38

Platinum Member
Oct 16, 2019
2,632
5,952
146
I wonder how Intel Xe will compare to Cezanne's Vega.
Obviously you're not gaming for shit on either one.

I'm disappointed that Cezanne won't be future-proof, given its lack of AV1 hardware decoding.

4K60 AV1 videos are brutal on the CPU.

I'm leaning toward Intel now because of AV1 support.
I actually like what AV1 brings, but unfortunately we can't call anything with support for it future-proof.

The reason is Qualcomm and Apple. Neither is choosing to support AV1 for some reason, despite one of them technically having support in hardware. With smartphones not supporting the standard, adoption is going to be hard.

Which is annoying because it seems like a legitimately good standard.
 
  • Like
Reactions: moinmoin

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
I'm disappointed that Cezanne won't be future-proof, given its lack of AV1 hardware decoding.

That is a bit of a bummer, but perhaps AMD will do what they did with VP9 and decode using shaders. It's not as efficient as dedicated hardware, but still better than using the CPU.

AV1 isn't that different from VP9 after all.
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
I'm disappointed that Cezanne won't be future-proof, given its lack of AV1 hardware decoding.
For mobile use and battery consumption it is certainly a bother, but basic decoding of 8 bpc content is already easy up to at least 4K24 on Zen 1 8C CPUs with dav1d, VideoLAN's speedy AV1 decoder.

My R7 1700 can easily handle 8 bpc content except at 8K, and even 4K24 at 10 bpc if I close all other apps and tabs in Firefox, so you can imagine a Zen 2, let alone a Zen 3 8C, doing much better.

The only thing holding up 10 bpc content playability at the moment is a lack of SIMD optimisations for all x86 platforms.

This is finally starting to be addressed as of a few days ago with some parts of AV1 being vectorized as AVX2 assembly:

10 bpc avg/mask/w_avg, 10 bpc blend{,_h/v}, 10 bpc 8tap put/prep

Even those three optimisations will probably be enough to render 10 bpc 4K24 pretty smooth on my R7 1700, and a Cezanne APU should do far better with the proper 256-bit AVX2 execution AMD has had since the Zen 2 generation.
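To see why those particular kernels vectorise so well, here is a simplified scalar sketch of what the compound-prediction operations compute. The exact rounding offsets and intermediate precision in dav1d differ (it operates on higher-precision intermediate predictions), so treat these as an illustration of the per-pixel arithmetic pattern, not dav1d's actual code.

```python
# Simplified scalar models of AV1 compound-prediction ops (avg, w_avg, mask).
# The same independent per-pixel arithmetic runs across 16 lanes at once
# in the AVX2 versions, which is why they are prime vectorisation targets.

def clip10(x: int) -> int:
    """Clamp to the valid 10 bpc sample range [0, 1023]."""
    return max(0, min(1023, x))

def avg(a: int, b: int) -> int:
    # Plain average of two predictions, round-to-nearest.
    return clip10((a + b + 1) >> 1)

def w_avg(a: int, b: int, w: int) -> int:
    # Distance-weighted average; AV1 uses weight pairs summing to 16.
    return clip10((a * w + b * (16 - w) + 8) >> 4)

def mask_blend(a: int, b: int, m: int) -> int:
    # Per-pixel mask blend; AV1 masks range 0..64.
    return clip10((a * m + b * (64 - m) + 32) >> 6)

print(avg(500, 600))        # 550
print(w_avg(500, 600, 12))  # 525 (weighted toward the first prediction)
```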
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
AV1 isn't that different from VP9 after all.
It is, I mean it really is where encoding is concerned.

Even decoding is more complex than VP9 by a good deal; AV1 is just more amenable to parallelisation than VP9 was, so the decoder hit is not so bad at all with well-written code like VideoLAN's dav1d.

The overall codec is simply a much bigger spec to cover than VP9, because so many different industry sources contributed to it beyond just Google/On2.

Even Nvidia and AMD contributed to AV1's development directly, rather than simply creating HW ASIC decoders from the standardised spec after the fact, as with HEVC and AVC before it.
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
The reason is Qualcomm and Apple. Neither is choosing to support AV1 for some reason, despite one of them technically having support in hardware. With smartphones not supporting the standard, adoption is going to be hard.
Qualcomm more so than Apple.

Whatever stake Apple has, Qualcomm is trying to push a lateral codec move for the new EVC, along with Huawei and Samsung.

Either way, AV1 decoding is completely optimised for ARM64 with NEON SIMD assembly code.

Unless you are trying to decode 4K on a phone, it should not overly tax the battery of a state-of-the-art SoC from either Qualcomm or Apple, provided you are using a video app with the dav1d decoder integrated, which now includes the coming v19 of Kodi as well as the latest mobile VLC versions.
 

pman6

Junior Member
Oct 10, 2011
18
1
71
For the sake of mobile use and battery consumption it is a bother certainly, but for basic decoding ability 8 bpc content is already easy with Zen1 8C CPU's up to 4K24 at least with dav1d, VideoLAN's speedy AV1 decoder.


Even my old i7-6700HQ plays AV1 4K24 no problem, though with 50% CPU load.

For me, the bar needs to be 4K60 AV1, now that YouTube and Netflix are pushing AV1.


I'm itching to build a PC for my mom, who uses an ancient Athlon 64 X4.
She uses the PC 12 hours a day, and it idles at 77 watts.
We pay 22 cents/kWh, so that amounts to wasting roughly $75 per year.

In contrast, a Renoir PC idles at 7 watts, so that's a huge saving.
I need a CPU soon to offset the electricity cost.

If I wait two years for Rembrandt, I will have flushed $150+ down the toilet in electricity costs.
That's already the cost of a new CPU.
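The arithmetic behind those figures roughly checks out. A quick sketch, using the wattages and rate quoted above:

```python
# Annual idle-power cost: watts * hours/day * days/year / 1000 * $/kWh.
def yearly_cost(watts: float, hours_per_day: float, usd_per_kwh: float) -> float:
    kwh_per_year = watts * hours_per_day * 365 / 1000
    return kwh_per_year * usd_per_kwh

old_pc = yearly_cost(77, 12, 0.22)   # ancient Athlon build at 77 W idle
renoir = yearly_cost(7, 12, 0.22)    # Renoir build at 7 W idle

print(f"old PC: ${old_pc:.2f}/yr")                    # ~$74/yr
print(f"Renoir: ${renoir:.2f}/yr")                    # ~$7/yr
print(f"2-year savings: ${2 * (old_pc - renoir):.2f}")  # ~$135
```

So the two-year difference lands around $135, in the same ballpark as the $150+ figure quoted in the post.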
 

jpiniero

Lifer
Oct 1, 2010
14,594
5,215
136
Even my old i7-6700HQ plays AV1 4K24 no problem, though with 50% CPU load.

For me, the bar needs to be 4K60 AV1, now that YouTube and Netflix are pushing AV1.


I'm itching to build a PC for my mom, who uses an ancient Athlon 64 X4.
She uses the PC 12 hours a day, and it idles at 77 watts.
We pay 22 cents/kWh, so that amounts to wasting roughly $75 per year.

In contrast, a Renoir PC idles at 7 watts, so that's a huge saving.
I need a CPU soon to offset the electricity cost.

Just buy her a laptop.
 

pman6

Junior Member
Oct 10, 2011
18
1
71
A laptop is a waste because the screen is too small.

I can use recycled parts to build a Ryzen PC for under $250. I only need a CPU, motherboard, and RAM; I have all the other parts.

I can't hit that price target with any laptop.
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
If I wait two years for Rembrandt, I will have flushed $150+ down the toilet in electricity costs.
The Van Gogh APU will have AV1 HW decode in 2021, but I've no idea whether it is anything but a custom SoC for something like MS Surface devices.

Also, both Renoir and Cezanne offer 8C at 15-35 W.

Even without a HW ASIC decoder they should still give great perf/watt once dav1d is fully optimised for 10 bpc content, though most YouTube AV1 content is currently 8 bpc anyway.

The fact that Netflix bankrolled the 10 bpc ARM64 NEON assembly for dav1d shows that they don't necessarily care about waiting for HW ASIC decoding. They are more concerned with bandwidth than battery life, especially now, with vastly more remote workers taking up bandwidth than at the start of 2020.
 

mikk

Diamond Member
May 15, 2012
4,140
2,154
136
Perf/watt won't be great on the CPU, and even a GPU hybrid is a half-baked solution. With a fixed-function unit, both CPU and GPU utilisation is super low.
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
Perf/watt won't be great on the CPU, and even a GPU hybrid is a half-baked solution.
If you are already running a browser, chances are a multitude of things like JS scripts are already fighting to do nasty things to your system's perf/watt, regardless of what is performing the video decode.

Either way, whether it is CPU, GPU or both, having something is still better than nothing.

To tell the truth, I do find it odd that Rembrandt info is floating around so early if it is due late 2021/early 2022.

Of course there are also mobile dGPU options inevitably coming from both Nvidia (RTX 3xxx) and AMD (RX 6xxx) that support AV1 HW decode.

Certainly we will see those combos in laptops and some NUCs a lot sooner than we can expect Rembrandt, and they are far more likely to actually be available 'off the shelf', as it were, than Van Gogh is if it is indeed custom.
 

jpiniero

Lifer
Oct 1, 2010
14,594
5,215
136

Videocardz has what looks like a die render of Cezanne. Their estimate is that Cezanne is 10% bigger than Renoir.
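Renoir's die is commonly cited at roughly 156 mm² (that figure is an outside assumption, not from this thread), so taken at face value the render-based estimate would imply something like:

```python
# Hypothetical die-area extrapolation from the render-based estimate.
renoir_mm2 = 156.0                # approximate published Renoir die size (assumption)
cezanne_mm2 = renoir_mm2 * 1.10   # Videocardz's "10% bigger" estimate

print(f"Estimated Cezanne die: {cezanne_mm2:.1f} mm^2")  # ~171.6 mm^2
```

As the next post argues, the renders are too inaccurate to lean on this number.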
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136

Videocardz has what looks like a die render of Cezanne. Their estimate is that Cezanne is 10% bigger than Renoir.

Those PR renders are only ever haphazardly accurate, not just in how they're filled in, but even to the point that the overall shape of the die is wrong. For example, the real Renoir die is taller than its render, so even though 10% sounds reasonable, it's not something I would ever advise extrapolating from these renders. That said, the Cezanne render looks significantly more grounded in reality than the Renoir one (what the heck was up with those boxes comprising the Zen 2 cores?), so that's at least a step in the right direction, even though the Cezanne render adds weird overlapping boxes inside each L3 area.

And actually, the cache area is pretty much a mess in general. Taken literally, Cezanne's L3 takes up about as much die area relative to the Zen 3 cores as Vermeer's does, as if it had 7/8ths of the L3 instead of half.

Seriously, whoever at AMD decided to move away from beautiful, mesmerizing die shots to half-assed generic renders: you did a bad thing. It's a huge step backwards for marketing and PR, and it's not as if you're actually improving secrecy all that much either.
 
  • Like
Reactions: Zepp and Tlh97