Question AMD Phoenix/Zen 4 APU Speculation and Discussion


Kaluan

Senior member
Jan 4, 2022
500
1,071
96
I doubt it's 500 MHz in your typical gaming load. By typical I mean modern 3D games that are taxing on the GPU. And lower when idle or during light 2D/desktop workloads.
I think they mean the 3D-load frequency *floor* (think, for example, of a 60 fps frame-capped esports game: the GPU won't clock below 500 MHz even if it only needs 300 MHz for that), whereas you're thinking of the frequency *ceiling* (fmax?).
So nothing to do with "typical" scenarios.
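To illustrate the floor-vs-ceiling distinction, here's a minimal sketch (the 2800 MHz fmax is a hypothetical value, not a confirmed spec):

```python
# Minimal sketch: a frequency *floor* clamps the low end of the GPU
# clock, a *ceiling* (fmax) clamps the top. Values are illustrative.
def gpu_clock_mhz(requested, floor=500, fmax=2800):
    return min(max(requested, floor), fmax)

# Frame-capped esports game that only "needs" 300 MHz:
print(gpu_clock_mhz(300))   # -> 500, the floor kicks in
print(gpu_clock_mhz(5000))  # -> 2800, the ceiling kicks in
```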

I think in this competitive discussion we forget that there will be a 6 GB 3050 mobile, which supposedly scores around 6,500 points in Time Spy.
At what power envelope? A 3060 at 65 W scores under 6,500.

Navi 34, like its predecessor, may take a year to make a showing, at which point it might as well be RDNA3+ based, on N4. Likely a monolithic sub-100 mm² die, but it should post some insane perf/W values.
Though at that point it may have to contend with Arc "B3xxM" as well.
 

scineram

Senior member
Nov 1, 2020
361
283
106
It looks like he has access to OEM reference data. What he originally posted was the old one, and now he has updated it.
The naming for PHX2 could be a placeholder or something.
It will be interesting to see what PHX2 will show us.
Interestingly, MLID is now backtracking on his smol Phoenix leak because of this. But I still believe his leak more than this Zen 4c BS.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,836
3,664
136
A CPU that uses 50 W for 10 s and then 30 W for the next 10 s to perform a bench has used 40 W on average, and that's its real TDP for this task. If you don't understand such basics, you are not fit for technical discussions...
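(For reference, the arithmetic in the quoted claim is a simple time-weighted average; a minimal sketch using the quoted 50 W / 30 W phases:)

```python
# Time-weighted average power over a run, with the 50 W / 30 W phases
# taken from the quote above.
phases = [(50.0, 10.0), (30.0, 10.0)]  # (watts, seconds)

energy_j = sum(w * t for w, t in phases)   # total energy in joules
duration_s = sum(t for _, t in phases)     # total time in seconds
print(energy_j / duration_s)               # -> 40.0 W average
```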
Again with the BS arguments. Neither AMD nor Intel consumes the power specified by the TDP for a single CB run.

Damn pro-AMD charlatan.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
I was looking at the Dell implementation, which saves Z-axis space but is NOT LPDDR.

So that implies to me that the main point is the actual thing they implemented, not the thing they didn't.
Oh, then don't worry. Last I heard, there were around four form factors in the draft spec, including one specifically for LPDDR. I think Dell's is the largest. Hopefully someone publishes more details soon.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,836
3,664
136
From the 7945HX review it is clear that AMD's sub-optimal memory controller coupled with the chiplet architecture is hurting gaming performance, yet again.

V-Cache should have been reserved for mobile CPUs. Not only would the extra L3 help with the memory deficit, it would also lead to lower power consumption in gaming, leaving more headroom for the GPU.
 

Kaluan

Senior member
Jan 4, 2022
500
1,071
96
That RAM speed is only 4800 MHz; that's not enough even for Rembrandt!
The highest gain is in Assassin's Creed Valhalla: +23%.
The lowest gain is in Shadow of the Tomb Raider: +9%.
Give them 6400 MHz LPDDR5 with 33% higher bandwidth, then we can talk about how good or bad the 780M is.

BTW, I wonder if this is really a proper comparison.
We know the memory is slow and likely has bad timings, but the 680M runs at only 2200 MHz while the 780M runs at 2800 MHz.
I don't think they have the same TDP limit.
AFAIK Phoenix also supports LPDDR5X-7500; some announced handhelds seemingly come with it, though no idea on ultraportable laptops yet.
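For scale, here's a rough sketch of the peak theoretical numbers those speed grades imply, assuming the usual 128-bit memory bus on these APUs:

```python
# Rough sketch: peak theoretical bandwidth for a 128-bit LPDDR5(X) bus.
def peak_bw_gbs(mt_per_s, bus_bits=128):
    return mt_per_s * bus_bits / 8 / 1000  # GB/s

for speed in (4800, 6400, 7500):
    print(speed, "->", peak_bw_gbs(speed), "GB/s")
# 4800 -> 76.8, 6400 -> 102.4 (~33% more), 7500 -> 120.0
```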

Judging by the frametime graphs in those tests, which look way spikier than the 680M/RDNA2's, the 780M/RDNA3 iGPU is clearly still on early drivers in all these tests.

Also, NBC's review is laughable. The only interesting tidbit is the Port Royal test (hybrid ray tracing), where it's over 50% faster than last gen.

Seems the wait for fully fleshed-out drivers and reviews of the Phoenix iGPU is still ongoing.


Here's 680M w/ DDR5-4800 vs LPDDR5-6400:

(attached charts: 680M performance with DDR5-4800 vs LPDDR5-6400)


And we're supposed to believe it matters LESS for the 780M, when it's not even using the natively supported 5600 in any of these so-called reviews, let alone LPDDR5 or 5X. lol
 

tamz_msc

Diamond Member
Jan 5, 2017
3,836
3,664
136
Also, check this:

(attached screenshot)

And this:

(attached screenshot)

No bandwidth advantage is going to overcome a 1 GHz GPU clock speed deficit and a 1.3 GHz CPU clock speed deficit and still give you the same FPS.
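A crude way to see why: if FPS is capped by whichever of compute or bandwidth binds first (a simple roofline-style bound, illustrative only; the example ratios are hypothetical), extra bandwidth can't lift you past the compute ceiling:

```python
# Illustrative roofline-style bound: relative performance is capped by
# the smaller of the compute ratio and the bandwidth ratio vs. a baseline.
def perf_bound(compute_ratio, bw_ratio):
    return min(compute_ratio, bw_ratio)

# Hypothetical part with a ~1 GHz GPU clock deficit but 33% more bandwidth:
print(perf_bound(compute_ratio=1.8 / 2.8, bw_ratio=1.33))  # ~0.64, compute-bound
```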
 

ryanjagtap

Member
Sep 25, 2021
109
130
96
Something is off. RDR2 was showing >60 FPS in the frame rate counter, but gameplay was obviously sub-20 FPS.
Here's another reviewer testing the 7840U in the GPD Win Max 2. It's just RPCS3 PS3 emulation testing, but he gets different FPS in RDR2, so something is really not right with ETA Prime's review.

 

Abwx

Lifer
Apr 2, 2011
11,096
3,765
136
Thread usage is binary: a thread is, at any given time, either busy or free. So a single-threaded core can never be only partially loaded. Core monitors that report partial load just calculate the load/idle ratio for a given core.

This changes nothing. If all threads are at 100% during 2.5% of the time, we end up at 2.5% CPU usage, and power is 2.5% of the 25 W maximum available. What matters is the duty cycle over time, not that a thread is binary in an instantaneous fashion that may last 10 ms every second.

Say an 8C/16T CPU runs at 2 GHz/25 W when all threads are fully loaded; take that as the basis for 100% CPU usage.

Excluding the ~25% SMT uplift, it will be at 80% when running at 2 GHz with only 8T, and power will go down to 20 W.

At 4T/2 GHz it will be at 40% and 10 W, and at 4T/1.4 GHz it will be at about 28.5% and about 4 W, which is exactly what is occurring in the review I linked with the Far Cry test.
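A minimal sketch of that back-of-the-envelope model. The cubic power-vs-frequency term is my assumption (voltage scaling roughly with frequency); everything else follows the figures above:

```python
# Back-of-the-envelope model: an 8C/16T CPU at 2 GHz / 25 W with all
# threads loaded defines 100% usage. Usage scales with thread count and
# clock; power additionally drops ~f^3 (assumed voltage scaling).
BASE_GHZ, BASE_W = 2.0, 25.0
SMT_UPLIFT = 1.25  # ~25% extra throughput from SMT, per the post

def load_fraction(threads, cores=8):
    """Throughput fraction at base clock, relative to all 16T busy."""
    if threads <= cores:                      # one thread per core
        return (threads / cores) / SMT_UPLIFT
    extra = (threads - cores) / cores         # second thread per core
    return 1 / SMT_UPLIFT + extra * (1 - 1 / SMT_UPLIFT)

def usage_and_power(threads, ghz):
    frac = load_fraction(threads)
    usage = frac * ghz / BASE_GHZ             # "CPU usage" over time
    power = BASE_W * frac * (ghz / BASE_GHZ) ** 3
    return usage, power

for t, f in [(16, 2.0), (8, 2.0), (4, 2.0), (4, 1.4)]:
    u, p = usage_and_power(t, f)
    print(f"{t}T @ {f} GHz -> {u:.0%}, {p:.1f} W")
# 16T @ 2.0 -> 100%, 25.0 W ; 8T @ 2.0 -> 80%, 20.0 W
# 4T @ 2.0 -> 40%, 10.0 W   ; 4T @ 1.4 -> 28%, ~3.4 W ("about 4 W")
```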
 

Geven

Banned
May 15, 2023
55
26
51
You've made some great points, @Mopetar! The Switch does have its limitations due to the older Maxwell-era graphics technology, but it still manages to deliver enjoyable gaming experiences for many users. Nintendo's partnership with Nvidia and the use of DLSS technology is a significant factor in its favor. While FSR is an alternative, Nvidia's expertise and current lead in this area make it a valuable ally for Nintendo. I like your idea of customizing the SoC by removing RT or tensor cores to create a leaner, more efficient system. This could potentially give Nintendo an edge in performance without compromising too much on cost. Whether Nvidia would be willing to create custom silicon for Nintendo is an interesting question. The success of the Switch could be a motivating factor, but as you mentioned, such partnerships aren't very common.
 

adroc_thurston

Platinum Member
Jul 2, 2023
2,730
4,006
96
but it won't be the money printing machine for AMD that it was for Intel
But it will be.
There is serious non-Intel competition now
Whomst?
The Neoverse roadmap is low-key a joke and was funded with Masabuxx, and ARM will inevitably jack licensing costs up to 11 once they IPO.
Ampere is lol.
Nuvia's server product got killed by Qualcomm.
Who else is left?
Maybe Zen 5 is the second AMD messiah
It's just a really, really solid core in an age when no one else is making really solid cores.
 

eek2121

Platinum Member
Aug 2, 2005
2,967
4,096
136
Yet the 8540-50U doesn't even have this NPU.
Honestly, from my point of view, a totally worthless refresh.
At least we know that Strix Point will be the 9*** series.

But if it comes with Win12, then I don't want to buy it.
I am really not interested in this AI stuff; I would rather deactivate it if possible.

I know.

That's assuming a Dragon Range rebrand is not coming.

Full Strix Point will be 12C24T, and the cut-down version 8C16T, in my opinion. Maybe there will also be a 10C20T model.
As for the IGP, from 16 CU down to 10-12 CU.
The “4” in 8945 means Zen 4. All Zen 5 parts will use 5 as the third digit (8955HX, for example). 9xxx will be next year.
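A tiny sketch of that decode rule, per the scheme described above (the 8955HX example is the hypothetical part from the post, not an announced SKU):

```python
# Sketch: in AMD's mobile numbering, the third digit encodes the core
# generation (e.g. "4" = Zen 4, "5" = Zen 5), per the post above.
def zen_generation(model: str) -> int:
    return int(model[2])

print(zen_generation("8945HS"))  # -> 4 (Zen 4)
print(zen_generation("8955HX"))  # -> 5 (hypothetical Zen 5 part)
```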

People (including Intel) are hating on AMD for launching mixed generations of products, but I see nothing wrong with it.
 

Abwx

Lifer
Apr 2, 2011
11,096
3,765
136
There are still issues after running the game for more than 300 seconds; they're waiting for fixes.

I guess they have extracted most of the missing performance: the FPS at 400 seconds is the same 40 FPS as before the 300 s mark, although there's a peak of 46 FPS when they start the run, but that could just be an artifact of light rendering at the start.

2x the performance of the previous gen seems to be what was expected from PHX2: given that the RAM bandwidth is only 62% higher while the CU count × frequency uplift is about 2x, it looks like they gained most of the possible performance.
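A quick sanity check on those ratios (sketch only; the 62% and ~2x figures come straight from the post, and the min/max bound is just the usual limiting-factor argument):

```python
# If the uplift were fully bandwidth-bound it would cap at ~1.62x; if
# fully compute-bound, at ~2x. Observing ~2x suggests PHX2 is not
# bandwidth-limited in this test.
bw_uplift = 1.62        # +62% RAM bandwidth, per the post
compute_uplift = 2.0    # ~2x CU count x frequency, per the post

print(min(bw_uplift, compute_uplift), "to", max(bw_uplift, compute_uplift))
# -> 1.62 to 2.0; the observed ~2x sits at the compute-bound end
```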
 

Abwx

Lifer
Apr 2, 2011
11,096
3,765
136
But the average clock is lower, so in practice it should perform more like last-gen Zen 3, no?

At 3.55 GHz it is supposed to perform like a Zen 3 at 4 GHz, and apart from a few 105 W TDP/142 W PPT desktop parts, there was no Zen 3 based SKU that boosted to 4 GHz on all cores, so the 3.55 GHz max frequency is not an issue for mobile parts.
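The per-clock uplift implied by that claim is simple to back out:

```python
# If Zen 4c at 3.55 GHz matches Zen 3 at 4.0 GHz, the implied IPC gain is:
print(4.0 / 3.55 - 1)  # ~0.127, i.e. roughly +13% per clock
```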
 

tamz_msc

Diamond Member
Jan 5, 2017
3,836
3,664
136
The GPU in the AMD-based laptop is set at a lower TDP than in the Intel counterparts, which renders all your "explanations" above completely fantasmagorical. Here's the full review of the laptop:
It is losing to the RTX 4080. And no, AMD's memory controller sucks with JEDEC timings; it is a proven fact.

Edit: Downvotes won't change the reality:

(attached screenshot)
 