Question AMD Phoenix/Zen 4 APU Speculation and Discussion

Jul 27, 2020
16,600
10,592
106
Well sure, no company outside of Nokia under Elop is going to negligently kill its currently available products.
Why don't they try keeping both on the market without discounting Zen 4? People who don't know any better and just want a decent laptop will buy the Zen 4 one without being any the wiser. Same thing with Zen 4 desktop, though I can see people not wanting the 7900X(3D)/7950X(3D) at their current prices after the Zen 5 launch. That's not a big deal; they should just try it and see what happens. The world isn't going to end.
 

NTMBK

Lifer
Nov 14, 2011
10,245
5,035
136
One thing I hate is that I suspect AMD is in no hurry to release client stuff as soon as it's ready, because they wait for existing inventory to deplete to a certain level. They possibly try to be ambitious, create more client chips than the market wants at AMD's desired prices, and then wait it out to sell those chips at discounts over the lifetime of the product, instead of giving them away to OEMs at dirt-cheap prices like Intel.
That would be very wasteful. AMD would be spending money on warehouses to hold the unsold product, and more importantly they would be wasting TSMC wafer allocations that could have gone towards better-selling or higher-margin products.
 

Timorous

Golden Member
Oct 27, 2008
1,648
2,863
136
Since AMD sells APUs with the GPU turned off / defective, is it possible for them to go the other way and sell parts with broken CPUs as a dGPU? If this is really around 3060M 60W-tier performance, that could make for a good 7400-tier dGPU, especially if they could make it such that when the CPU is fused off, that L3 cache becomes Infinity Cache. It would save AMD designing and building an N34, so it might work out as a cheaper way to service that tier.
 

yuri69

Senior member
Jul 16, 2013
394
629
136
How will the software for Phoenix's AIE be written? Who will write the apps? What kind of mobile/APU-class apps need the AIE?

It kinda sounds like dead silicon given AMD's APU market share.
 

Bigos

Member
Jun 2, 2019
131
295
136
The conclusion will be the same whatever the task or the active core count: best efficiency with respect to battery life is when CPU power is equal to the rest-of-system power, that's mathematically provable...

It is not true in general. It is only true in some designs.

Consider a design that uses very little power outside of the CPU, e.g. 1W. What if the CPU needs to run at 1GHz to use 1W but it can run up to 2GHz on the same voltage? Such a CPU will use 2W at 2GHz. Now you can run your CPU at 1GHz and the design will use 2W. Or you can run it at 2GHz and it will use 3W. Which one is more efficient?

(The above ignores the leakage which does not scale with frequency. I.e. such a CPU at 2GHz will use even less than 2W, depending on how much can be attributed to the leakage.)

Obviously, the frequency/voltage curve often means that you cannot double your frequency at only twice the power consumption. However, you cannot just say "it is mathematically provable that a design that uses half its power on the CPU is the most efficient". Or you can say it when you provide such a proof :)

There are far too many variables (frequency/voltage curve, leakage, ...) for there to be one single solution to this. It depends on the design.
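
To make the arithmetic in that example concrete, here's a minimal sketch, assuming a fixed amount of work, the hypothetical 1W rest-of-system platform above, and CPU dynamic power scaling linearly with frequency at constant voltage (all the constants are illustrative, not measured):

```python
# Energy per task for the hypothetical platform above: rest-of-system
# power is fixed at 1 W, and the CPU draws 1 W at 1 GHz, scaling
# linearly with frequency at the same voltage up to 2 GHz.

REST_OF_SYSTEM_W = 1.0   # everything that isn't the CPU
CPU_W_PER_GHZ = 1.0      # 1 W at 1 GHz, same voltage up to 2 GHz
WORK_GHZ_SECONDS = 10.0  # arbitrary fixed amount of work

def energy_joules(freq_ghz: float) -> float:
    """Total platform energy to finish the fixed workload at freq_ghz."""
    runtime_s = WORK_GHZ_SECONDS / freq_ghz              # faster clock -> shorter run
    platform_w = REST_OF_SYSTEM_W + CPU_W_PER_GHZ * freq_ghz
    return platform_w * runtime_s

for f in (1.0, 2.0):
    print(f"{f} GHz: {energy_joules(f):.1f} J")
# 1.0 GHz: 20.0 J  (2 W for 10 s)
# 2.0 GHz: 15.0 J  (3 W for 5 s) -> the faster run wins on battery here
```

Which is the point: once leakage and the real V/f curve are added in, where the optimum lands depends on the design, not on a universal rule.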
 

Kaluan

Senior member
Jan 4, 2022
500
1,071
96
I doubt it's 500 MHz in your typical gaming load. By typical I mean your modern 3D games that are taxing on the GPU. And lower when idle or 2D/desktop minor workloads.
I think they mean the 3D-load frequency *floor* (think, for example, of a 60fps frame-capped eSports game: the GPU will not clock lower than 500MHz even if it only needs 300MHz for that), while you're thinking of the frequency *ceiling* (fmax?).
So it's nothing to do with "typical" scenarios.

I think in this competitive discussion we forget that there will be a 6 GB 3050 mobile, which supposedly scores around 6500 points in Time Spy.
At what power envelope? 3060 at 65W does < 6500.

Navi34, like its predecessor, may take a year to make a showing. At which point it might as well be RDNA3+ based, on N4. Likely a monolithic <100mm² die, but it will post some insane perf/W values.
Though at that point it may have to contend with Arc "B3xxM" as well.
 

scineram

Senior member
Nov 1, 2020
361
283
106
It looks like he has access to OEM reference data. What he originally posted was the old one, and now he has updated it.
Naming for PHX2 could be a placeholder or something.
It will be interesting to see what PHX2 will show us.
Interestingly, MLID is now backtracking on his smol Phoenix leak due to this. But I still believe him more than this Zen 4c BS.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,825
3,654
136
A CPU that uses 50W for 10s and then 30W for the next 10s to perform a bench has used 40W on average, and that's its real TDP for this task; if you don't understand such basics you are not fit for technical discussions...
Again with the BS arguments. Neither AMD nor Intel CPUs consume the power specified by the TDP during a single CB run.

Damn pro-AMD charlatan.
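
For what it's worth, the arithmetic in the quoted claim is just a time-weighted average of power:

$$\bar{P} = \frac{50\,\mathrm{W}\cdot 10\,\mathrm{s} + 30\,\mathrm{W}\cdot 10\,\mathrm{s}}{20\,\mathrm{s}} = \frac{800\,\mathrm{J}}{20\,\mathrm{s}} = 40\,\mathrm{W}$$

The averaging itself is uncontroversial; the dispute is over whether that average deserves the label "TDP" at all.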
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
I was looking at the Dell implementation, which saves Z-axis space but is NOT LPDDR.

So that implies to me that the main point is the actual thing they implemented, not the thing they didn't.
Oh, then don't worry. Last I heard, there were like 4-ish form factors as part of the draft spec, including one specifically for LPDDR. Think Dell's is the largest. Hopefully someone publishes more details soon.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,825
3,654
136
From the 7945HX review it is clear that AMD's sub-optimal memory controller coupled with the chiplet architecture is hurting gaming performance, yet again.

V-Cache should have been reserved for mobile CPUs. Not only would the extra L3 help with the memory deficit, it would also lead to lower power consumption in gaming, allowing more headroom for the GPU.
 

Kaluan

Senior member
Jan 4, 2022
500
1,071
96
That RAM speed is only 4800 MT/s; that's not enough even for Rembrandt!
The highest gain is in Assassin's Creed Valhalla: +23%.
The lowest gain is in Shadow of the Tomb Raider: +9%.
Give them LPDDR5-6400 with 33% higher bandwidth, then we can talk about how good or bad the 780M is.

BTW, I wonder if this is really a proper comparison.
We know the memory is slow and likely has bad timings, but the 680M runs at only 2200MHz while the 780M has a 2800MHz clock speed.
I don't think they have the same TDP limit.
AFAIK Phoenix also supports LPDDR5X-7500; some announced handhelds seemingly come with it, though no idea about ultraportable laptops yet.

Judging by the frametime graphs in those tests, which look way more spiky than the 680M/RDNA2's, the 780M/RDNA3 iGPU is clearly still on early drivers in all these tests.

Also, NBC's review is laughable. The only interesting tidbit is the Port Royal test (hybrid/ray tracing), where it's over 50% faster than last gen.

Seems the wait for fully fleshed-out drivers and reviews of the Phoenix iGPU is still ongoing.


Here's 680M w/ DDR5-4800 vs LPDDR5-6400:

[attached charts: 680M with DDR5-4800 vs LPDDR5-6400]


And we're supposed to believe it matters LESS for the 780M, when it's not even using the natively supported 5600 in any of these so-called reviews, let alone LPDDR5 or 5X. lol
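
For context on the bandwidth numbers being argued over here, a quick back-of-the-envelope sketch, assuming the 128-bit memory bus these APUs use (peak theoretical figures only):

```python
# Peak theoretical memory bandwidth = data rate (MT/s) x bus width (bytes).
# Rembrandt/Phoenix use a 128-bit (16-byte) memory bus.
BUS_BYTES = 128 // 8

for name, mts in [("DDR5-4800", 4800), ("LPDDR5-6400", 6400), ("LPDDR5X-7500", 7500)]:
    gbps = mts * BUS_BYTES / 1000  # MB/s -> GB/s
    print(f"{name:>13}: {gbps:5.1f} GB/s")

# DDR5-4800    :  76.8 GB/s
# LPDDR5-6400  : 102.4 GB/s  (+33% over 4800, as quoted above)
# LPDDR5X-7500 : 120.0 GB/s
```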
 

tamz_msc

Diamond Member
Jan 5, 2017
3,825
3,654
136
Also, check this:

[attached screenshot]

And this:

[attached screenshot]

No bandwidth advantage is going to overcome a 1 GHz GPU clock speed deficit and a 1.3 GHz CPU clock speed deficit to give you the same FPS.
 

ryanjagtap

Member
Sep 25, 2021
108
127
96
Something is off. RDR2 was showing >60 FPS in the frame rate counter, but gameplay was obviously sub-20 FPS.
Here's another reviewer testing the 7840U on the GPD Win Max 2. It's just RPCS3 PS3 emulation testing. He gets different FPS in RDR2, so something is really not right with ETA Prime's review.

 

Abwx

Lifer
Apr 2, 2011
11,030
3,665
136
Thread usage is binary: a thread at any given time is either busy or free. So a single-threaded core can never be only partially loaded. Core monitors that report partial load just calculate the load/idle ratio for a given core.

This changes nothing. If all threads are at 100% during 2.5% of the time, we end up at 2.5% CPU usage, and power is 2.5% of the 25W max power available. What matters is the duty cycle over time, not that a thread is binary on an instantaneous basis that may last 10ms every second.

Say an 8C/16T CPU runs at 2GHz/25W when all threads are fully loaded, as the basis for 100% CPU usage.

Excluding the ~25% SMT uplift, it will be at 80% when running at 2GHz with only 8T, and power will go down to 20W.

At 4T/2GHz it will be at 40% and 10W, and at 4T/1.4GHz it will be at 28.5% and about 4W, which is exactly what is occurring in the review I linked with the Far Cry test.
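
A rough sketch of the model behind those numbers; this is my reading of the post, not an official formula, assuming throughput-based "usage" that scales with active threads and frequency, and dynamic power scaling roughly with active cores × f × V², with voltage dropping linearly with frequency below 2GHz:

```python
# Illustrative model: "usage" is throughput relative to all 16 threads
# at 2 GHz; dynamic power ~ active cores * f * V^2 with V ~ f near
# this operating range (hence the cubic frequency term).

BASE_W = 25.0      # 16T fully loaded at 2 GHz
SMT_UPLIFT = 0.25  # second thread per core adds ~25% throughput

def usage_and_power(threads: int, freq_ghz: float, base_ghz: float = 2.0):
    # Throughput of n threads on an 8-core/16-thread part.
    cores = min(threads, 8)
    smt = max(threads - 8, 0)
    throughput = (cores + SMT_UPLIFT * smt) / (8 + SMT_UPLIFT * 8)
    usage = throughput * freq_ghz / base_ghz
    power = BASE_W * throughput * (freq_ghz / base_ghz) ** 3
    return usage, power

for t, f in [(16, 2.0), (8, 2.0), (4, 2.0), (4, 1.4)]:
    u, p = usage_and_power(t, f)
    print(f"{t:>2}T @ {f} GHz: usage ~{u:6.1%}, power ~{p:4.1f} W")
# 16T @ 2.0 GHz: usage ~100.0%, power ~25.0 W
#  8T @ 2.0 GHz: usage ~ 80.0%, power ~20.0 W
#  4T @ 2.0 GHz: usage ~ 40.0%, power ~10.0 W
#  4T @ 1.4 GHz: usage ~ 28.0%, power ~ 3.4 W  (the post's "about 4W")
```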
 

Geven

Banned
May 15, 2023
55
26
51
You've made some great points, @Mopetar! The Switch does have its limitations due to the older Maxwell-era graphics technology, but it still manages to deliver enjoyable gaming experiences for many users. Nintendo's partnership with Nvidia and the use of DLSS technology is a significant factor in its favor. While FSR is an alternative, Nvidia's expertise and current lead in this area make it a valuable ally for Nintendo.

I like your idea of customizing the SoC by removing RT or tensor cores to create a leaner, more efficient system. This could potentially give Nintendo an edge in performance without compromising too much on cost.

Whether Nvidia would be willing to create custom silicon for Nintendo is an interesting question. The success of the Switch could be a motivating factor, but as you mentioned, such partnerships aren't very common.
 

adroc_thurston

Platinum Member
Jul 2, 2023
2,368
3,320
96
but it won't be the money printing machine for AMD that it was for Intel
But it will be.
There is serious non-Intel competition now
whomst.
The Neoverse roadmap is low-key a joke and was funded with Masabuxx, and ARM will inevitably jack licensing costs up to 11 once they IPO.
Ampere is lol.
Nuvia server product got killed by Qualcomm.
who else left?
Maybe Zen 5 is the second AMD messiah
It's just a really-really solid core in the age of no one else making really solid cores.
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,027
136
Yet the 8540-50U doesn't even have this NPU.
Honestly, from my point of view a totally worthless refresh.
At least we know that Strix Point will be the 9*** series.

But if it comes with Win12, then I don't want to buy it.
I am really not interested in this AI stuff; I would rather deactivate it if possible.

I know.

That's assuming a Dragon Range rebrand is not coming.

Full Strix Point will be 12C24T, the cutdown will be 8C16T in my opinion. Maybe there will also be a 10C20T model.
As for the iGPU, from 16CU down to 10-12CU.
The "4" in 8945 means Zen 4. All Zen 5 parts will use 5 as the third number (8955HX, for example). 9xxx will be next year.

People (including Intel) are hating on AMD for launching mixed generations of products, but I see nothing wrong with it.