Question Apple A15 announced

Page 6 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

jpiniero

Lifer
Oct 1, 2010
14,585
5,209
136
Seems like the CPU or GPU is not much faster than A14. Might be due to wanting lower power draw for better battery life.

The NPU did get a bump: 15.8 vs. 11.8 TOPS.

Edit: The Pro does get an increase to 5 GPU cores from 4. Might be useful because of the 120 Hz VRR they added.
 
Last edited:

roger_k

Member
Sep 23, 2021
36
61
61
All your numbers are incorrect. I could post some real numbers (I have access to multiple systems), but instead I will point you to AnandTech: https://www.anandtech.com/bench/product/2687?vs=2633

Where do these numbers come from? I can't find any review of Lenovo Yoga Slim 7 on Anandtech. And they are clearly nonsensical.

Here is a review of a NUC with 4800U: https://www.anandtech.com/show/16236/asrock-4x4-box4800u-renoir-nuc-review/11
And here is Lenovo Yoga Slim 7 on notebookcheck: https://www.notebookcheck.net/The-R...vo-Yoga-Slim-7-14-Laptop-Review.456068.0.html

Both show total system power at ~ 60W under load. The Mac mini figure on your graph is for the entire machine. For SoC alone see Andrei's tweets, which show that M1 never exceeds 5W on a single core and draws at most 20-25 watts in most demanding benchmarks (this is also consistent with my own testing):


Power consumption of Zen3 cores (20W per core peak): https://www.anandtech.com/show/1621...e-review-5950x-5900x-5800x-and-5700x-tested/8

Power consumption of Tiger Lake cores (from Andrei), this has also nice graphs comparing it to 4800U: https://www.anandtech.com/show/16084/intel-tiger-lake-review-deep-dive-core-11th-gen/7

In-depth discussion (with active participation from Andrei): https://www.realworldtech.com/forum/?threadid=195334&curpostid=195334

The conclusion of all this is that the data is there, but it can be quite difficult to find. I am aware of these things because I actively follow the expert discussions as they happen, but that can't be expected of everybody.

I agree on all points except the concept of turbo boost. I don't think it applies to the M1, since the performance cores always run at the same clock speed under load, so long as the SoC is cooled with a fan.
On Intel CPUs, the "boost" clock is rarely used on all cores. If Apple were using turbo boost, we'd see the M1 reach, say, 3.4 GHz for brief amounts of time or when using a single core.

I know what you mean, but I think the fundamental mechanism is very similar. It's just that x86 CPUs have to be much more aggressive with opportunistic overclocking (otherwise they would be 30% slower), while Apple can lean back. I mean, Firestorm's "normal" clock range is 2.5-3.2 GHz (with a peak power consumption of only 5 watts). A premium modern x86 CPU instead runs at 2-5 GHz. In the end, it's about how aggressive the frequency curve is. Intel in particular tries to utilize every gap in the thermal envelope it can get; Apple instead puts the ceiling very low. Because they can afford it.
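The frequency-curve argument above can be sketched with the classic dynamic-power relation P ≈ C·V²·f: pushing clocks also pushes voltage, so power grows much faster than frequency. A minimal sketch in Python; the voltage figures are illustrative assumptions, not measurements:

```python
# Why aggressive boost clocks cost disproportionate power, using the
# classic dynamic-power relation P ~ C * V^2 * f.
# The voltages below are assumptions for illustration, not measured values.

def dynamic_power(freq_ghz: float, volts: float, cap: float = 1.0) -> float:
    """Relative dynamic power: P = C * V^2 * f (arbitrary units)."""
    return cap * volts ** 2 * freq_ghz

# Hypothetical operating points on a voltage/frequency curve.
apple_like = dynamic_power(freq_ghz=3.2, volts=0.85)   # low-ceiling design
x86_boost  = dynamic_power(freq_ghz=5.0, volts=1.45)   # aggressive boost point

# A ~56% clock increase needs ~70% more voltage here, so relative power
# more than quadruples rather than scaling linearly with frequency.
print(f"relative power ratio: {x86_boost / apple_like:.1f}x")
```

Even with made-up voltages, the shape is the point: a ~56% clock advantage at the top of the curve can cost ~4-5x the power, which is why the x86 boost points look so expensive next to a low-ceiling design.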

Btw, this is going to be much worse for Alder Lake. I think it's funny that some people praise Intel for fixing their sustained-throughput problems. To me, Alder Lake is an admission that Intel simply cannot make a high-performance x86 core that doesn't suck watts like an old truck engine, so they have to supplement it with cores specialized for throughput.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136

Just because AMD marketing chose to clock Zen 3 at 5 GHz and 1.5 V, where it chews 20-25 W, means nothing about how efficient the actual cores are.
I did a very unscientific test, pinning CB23 to one core on a static-OC 4.4 GHz 5950X: average power consumption for that core was 3.9 W per the HWInfo average, and the CB23 score was 1467 pts. So very comparable to what a native M1 scores, at very comparable power usage per core.

That pretty much invalidates everything about "opportunistic overclocking" and so on; Zen 3 is just incredibly efficient when properly clocked and fed the right voltage for those clocks.

EDIT: Due to an affinity-setting problem with CB23, my results are off by 2W => see https://forums.anandtech.com/threads/apple-a15-announced.2597187/post-40596675 sorry
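As the EDIT above shows, letting the benchmark manage its own affinity can silently skew per-core power readings. One way to avoid that is to pin the process from the outside before it starts. A Linux-only sketch (the test above was on Windows, where `start /affinity` is the rough equivalent); `cb23_cli` is a hypothetical binary name:

```python
# Pinning a workload to one core from the outside, so the benchmark
# cannot silently re-adjust its own affinity mid-run.
# Linux-only: uses os.sched_setaffinity in the child before exec.

import os
import subprocess

def run_pinned(cmd: list[str], core: int) -> int:
    """Launch cmd with its CPU affinity locked to a single core."""
    def pin() -> None:
        # pid 0 = the calling (child) process, between fork and exec
        os.sched_setaffinity(0, {core})
    proc = subprocess.Popen(cmd, preexec_fn=pin)
    return proc.wait()

# Example (hypothetical benchmark binary; substitute the real workload):
#   run_pinned(["./cb23_cli", "--single-core"], core=0)
```

Because the affinity is set between fork and exec, the workload inherits it and any per-core power counter you then read is actually attributed to the core you chose.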
 
Last edited:
  • Like
Reactions: Tlh97 and moinmoin

insertcarehere

Senior member
Jan 17, 2013
639
607
136
Just because AMD marketing chose to clock Zen 3 at 5 GHz and 1.5 V, where it chews 20-25 W, means nothing about how efficient the actual cores are.
I did a very unscientific test, pinning CB23 to one core on a static-OC 4.4 GHz 5950X: average power consumption for that core was 3.9 W per the HWInfo average, and the CB23 score was 1467 pts. So very comparable to what a native M1 scores, at very comparable power usage per core.

That pretty much invalidates everything about "opportunistic overclocking" and so on; Zen 3 is just incredibly efficient when properly clocked and fed the right voltage for those clocks.

A 5980HS is as voltage-optimized as a Zen 3 core gets, and it gets nowhere near 4.4 GHz when restricted to ~4 W per core:
Power-Agi-5980HS-Perf.png

Power-P95-5980HS.png

~3.5-3.6 GHz is a much more realistic assessment for Zen 3 cores at those power levels, and that's nowhere near a 3 GHz M1, especially as this is a mobile part with reduced L3 cache and therefore lower IPC.
 
  • Like
Reactions: Viknet

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
A 5980HS is as voltage-optimized as a Zen 3 core gets, and it gets nowhere near 4.4 GHz when restricted to ~4 W per core:

You can't really take whole-chip power, divide it by 8, and claim ~4 W per core, even on a mobile chip with no IOD. CB23 is also a "light" load compared to Prime95.
At least with CB23 we can compare the performance; how do we do the same with Prime95? What if the Zen core is pumping 80 GFLOPS at 6 W and the M1 is doing 40 at 5 W?

~3.5-3.6 GHz is a much more realistic assessment for Zen 3 cores at those power levels, and that's nowhere near a 3 GHz M1, especially as this is a mobile part with reduced L3 cache and therefore lower IPC.

There is a huge gap between the claims I was responding to, that x86 uses 20 W per core because they must push it that way, and us trying to split a few watts per core across these three forum posts.
Btw, the M1 is not at 3.7 W in other workloads either; if we allow the same ~5 W, Zen 3 can be stretched further.
 
  • Like
Reactions: coercitiv

insertcarehere

Senior member
Jan 17, 2013
639
607
136
There is a huge gap between the claims I was responding to, that x86 uses 20 W per core because they must push it that way, and us trying to split a few watts per core across these three forum posts.
Btw, the M1 is not at 3.7 W in other workloads either; if we allow the same ~5 W, Zen 3 can be stretched further.

No, Zen 3 doesn't use 20 W per core when frequencies are slightly more sane, but AnandTech has measured power draw across a range of frequencies and it sure doesn't get close to your claims either:
PerCore-1-5950X.png

PerCore-2-5900X.png
 
  • Like
Reactions: Viknet

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
No, Zen 3 doesn't use 20 W per core when frequencies are slightly more sane, but AnandTech has measured power draw across a range of frequencies and it sure doesn't get close to your claims either:
PerCore-1-5950X.png

PerCore-2-5900X.png
That's not measurements across frequencies but measurements across maxed out cores. The difference between these and M1 is that the latter still is a mobile-oriented chip that maxes out at a somewhat sane efficiency point while these are essentially open end desktop chips that are designed to max out at the highest possible performance, efficiency be damned.
 
  • Like
Reactions: ryan20fun and Tlh97

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
No, Zen 3 doesn't use 20 W per core when frequencies are slightly more sane, but AnandTech has measured power draw across a range of frequencies and it sure doesn't get close to your claims either:

OK, let's scratch my testing results; it seems CB23 adjusts affinity on the test run, and that lowers the indicated power usage on the core. I was wondering why the Windows affinity did not match the core order in HWInfo64, and that's because it was getting adjusted.

The "real" data is in, and while not as good for Zen 3, it is still far from 20 W.
1632663469958.png

5.6 W for 4.4 GHz and a 1450 score.
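With the corrected figure, the efficiency comparison reduces to points-per-watt arithmetic. The Zen 3 numbers below are from the post above; the M1 side (~1500 pts at the ~3.7 W mentioned earlier in the thread) is an assumed round figure for illustration:

```python
# Points-per-watt comparison using the corrected figures above
# (1450 pts at 5.6 W for the tuned 4.4 GHz Zen 3 core). The M1 figures
# (~1500 pts at ~3.7 W per Firestorm core) are assumed round numbers
# for illustration, not measurements from this thread.

zen3_score, zen3_watts = 1450, 5.6
m1_score,   m1_watts   = 1500, 3.7    # assumed, for illustration

zen3_eff = zen3_score / zen3_watts    # ≈ 259 pts/W
m1_eff   = m1_score / m1_watts        # ≈ 405 pts/W

print(f"Zen 3: {zen3_eff:.0f} pts/W, M1: {m1_eff:.0f} pts/W "
      f"({m1_eff / zen3_eff - 1:.0%} advantage to M1)")
```

That works out to a roughly 55-60% efficiency edge for the M1 in this one workload, in line with the "50% or so" estimate given later in the thread.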
 

roger_k

Member
Sep 23, 2021
36
61
61
Just because AMD marketing chose to clock Zen 3 at 5 GHz and 1.5 V, where it chews 20-25 W, means nothing about how efficient the actual cores are.
I did a very unscientific test, pinning CB23 to one core on a static-OC 4.4 GHz 5950X: average power consumption for that core was 3.9 W per the HWInfo average, and the CB23 score was 1467 pts. So very comparable to what a native M1 scores, at very comparable power usage per core.

That pretty much invalidates everything about "opportunistic overclocking" and so on; Zen 3 is just incredibly efficient when properly clocked and fed the right voltage for those clocks.

EDIT: Due to an affinity-setting problem with CB23, my results are off by 2W => see https://forums.anandtech.com/threads/apple-a15-announced.2597187/post-40596675 sorry

I find your results surprising because they seem to directly contradict other reports. For example, in AnandTech's tests Zen 3 was closer to 10 watts running at ~4 GHz. Are you sure this is 100% CPU core utilization? Can you verify by running a simple power-virus type of workload? How about other benchmarks? It has been pointed out that CB is problematic for a bunch of reasons…

Edit: sorry, only now saw your other post. So your result suggests that in CB23 the M1 uses around 40% less power to deliver comparable performance. I'm still surprised that the AMD chip would use less than 6 watts… can you track CPU utilization? I couldn't infer it from the picture, sorry if it was mentioned. I have this nagging suspicion that CB underutilizes the CPU, which would also explain why it scales so well with SMT.

P.S. The weird thing is that the 5800U (which has a max turbo of 4.4 GHz itself) is slower than the M1 in other popular benchmarks. Unfortunately, nobody seems to track the power consumption, so I can't find any other data…
 
Last edited:
  • Like
Reactions: Viknet and Tlh97

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
I'm still surprised that the AMD chip would use less than 6 watts… can you track CPU utilization? I couldn't infer it from the picture, sorry if it was mentioned. I have this nagging suspicion that CB underutilizes the CPU, which would also explain why it scales so well with SMT.

CPU utilization is 100% for that core. And CB is really a "special" workload that underutilizes both the M1 and Zen 3; that's why the power usage is comparable.
The sub-6 W is no surprise to me; I think I could get it even lower by dropping clocks and tuning the voltage further down.
The thing with the "stock" configuration is that even on a golden chip AMD still needs to add massive voltage safety margins: they have no idea how good and responsive the motherboard VRMs are; there is a reference load-line calibration, but they have to account for it; and so on. Then there is also a safety margin for chip degradation over the years, etc. It all results in the chip being fed a safe, but way higher than required, voltage.
That's where undervolting steps in and erases that margin to achieve power savings, while keeping things (hopefully) stable under stress tests. There are obviously HUGE gains to be had for ST, because AMD marketing is chasing big clocks and doesn't mind feeding the core 1.5 V to achieve 10% more performance vs. the 4.4 GHz I run, at the cost of 4-5x the power consumption per core.

The M1 does not have all that. Apple is building a known setup of VRMs and power delivery to a tight spec, and they can shave quite some margin everywhere. Combined with a better process and an architecture tailored for 3 GHz clocks, they have a superior CPU; it's just that the superiority is not "multiple times" but rather 50% or so, to the best of my estimates.
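The "4-5x the power for 10% more performance" claim is at least directionally consistent with dynamic-power scaling. A rough plausibility check; the 1.0 V figure for the tuned 4.4 GHz point is a guess (the post only gives 1.5 V for the boost point):

```python
# Plausibility check of "4-5x the per-core power for ~10% more performance".
# Voltages are assumptions for illustration: 1.5 V for the stock boost point
# comes from the post; ~1.0 V for a tuned 4.4 GHz core is a guess.

def rel_power(volts: float, freq_ghz: float) -> float:
    """Relative dynamic power ~ V^2 * f (arbitrary units)."""
    return volts ** 2 * freq_ghz

tuned = rel_power(1.00, 4.4)    # undervolted 4.4 GHz operating point
boost = rel_power(1.50, 4.84)   # stock 1.5 V boost, ~10% higher clock

print(f"{boost / tuned:.1f}x the power for "
      f"{4.84 / 4.4 - 1:.0%} more frequency")
```

V²·f alone accounts for roughly 2.5x here; the rest of the claimed 4-5x would come from leakage, which also rises steeply with voltage.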

P.S. The weird thing is that the 5800U (which has a max turbo of 4.4 GHz itself) is slower than the M1 in other popular benchmarks.

The M1 is a beast in integer workloads; eye-watering performance in stuff like web browsers, due to an amazing architecture combined with a very capable platform overall.
 

jeanlain

Member
Oct 26, 2020
149
122
86
Combined with a better process and an architecture tailored for 3 GHz clocks, they have a superior CPU; it's just that the superiority is not "multiple times" but rather 50% or so, to the best of my estimates.
Sure, you can downclock a Ryzen to get the same Cinebench scores as an M1 and save a lot of power.
However, Cinebench performs rather poorly on Apple Silicon and does not reflect the results of accepted benchmarks like SPEC. After all, it's not an industry standard and it does not claim to be platform-agnostic. Who knows what x86 optimisations it has? It's been developed for that ISA over years, and it was only ported to ARM in 2020.
One could underclock an M1 to match the score of a Ryzen in certain tasks (for instance, Geekbench tasks), but is it relevant?
What's the power consumption of a 5950X core during Geekbench and Cinebench ST at stock frequency?
 
Last edited:

roger_k

Member
Sep 23, 2021
36
61
61
CPU utilization is 100% for that core. And CB is really a "special" workload that underutilizes both the M1 and Zen 3; that's why the power usage is comparable.
The sub-6 W is no surprise to me; I think I could get it even lower by dropping clocks and tuning the voltage further down.
The thing with the "stock" configuration is that even on a golden chip AMD still needs to add massive voltage safety margins: they have no idea how good and responsive the motherboard VRMs are; there is a reference load-line calibration, but they have to account for it; and so on. Then there is also a safety margin for chip degradation over the years, etc. It all results in the chip being fed a safe, but way higher than required, voltage.
That's where undervolting steps in and erases that margin to achieve power savings, while keeping things (hopefully) stable under stress tests. There are obviously HUGE gains to be had for ST, because AMD marketing is chasing big clocks and doesn't mind feeding the core 1.5 V to achieve 10% more performance vs. the 4.4 GHz I run, at the cost of 4-5x the power consumption per core.

The M1 does not have all that. Apple is building a known setup of VRMs and power delivery to a tight spec, and they can shave quite some margin everywhere. Combined with a better process and an architecture tailored for 3 GHz clocks, they have a superior CPU; it's just that the superiority is not "multiple times" but rather 50% or so, to the best of my estimates.


The M1 is a beast in integer workloads; eye-watering performance in stuff like web browsers, due to an amazing architecture combined with a very capable platform overall.

Thanks for this, it's really very interesting! I think your experiment really drives home the message that chip "efficiency" cannot be described by a single number, but is a function of configuration (frequency) and workload. It's a shame that the industry does not have a standard way of testing and reporting these things (but then again, maybe they don't want it, since it would be bad for marketing)...
 
  • Like
Reactions: ryan20fun and Tlh97

itsmydamnation

Platinum Member
Feb 6, 2011
2,765
3,131
136
Thanks for this, it's really very interesting! I think your experiment really drives home the message that chip "efficiency" cannot be described by a single number, but is a function of configuration (frequency) and workload. It's a shame that the industry does not have a standard way of testing and reporting these things (but then again, maybe they don't want it, since it would be bad for marketing)...
Why? Run my CPU at 1 MHz, have 1 GB of L1+L2+L3 that I run at 4 GHz, and DDR that I run at 4800. Now I have the highest-IPC, most efficient CPU core in the world.

What actually matters is viable configurations in target specifications and nothing more.
 
  • Like
Reactions: NTMBK

jeanlain

Member
Oct 26, 2020
149
122
86
Thanks for this, it's really very interesting! I think your experiment really drives home the message that chip "efficiency" cannot be described by a single number, but is a function of configuration (frequency) and workload.
That's why Nuvia showed power consumption as a function of clock frequency for different CPUs, and why Apple also presented it this way (albeit with a dummy graph, which I can no longer find).
 

roger_k

Member
Sep 23, 2021
36
61
61
That's why Nuvia showed power consumption as a function of clock frequency for different CPUs, and why Apple also presented it this way (albeit with a dummy graph, which I can no longer find).

The problem as I see it is that modern CPUs have a huge dynamic range of clock frequencies, with very different efficiency figures along that curve. And all the marketing is about obfuscating the data and confusing the user. The worst part: users seem OK with it, and even tech journalists don't mind.
 

Doug S

Platinum Member
Feb 8, 2020
2,254
3,485
136
The problem as I see it is that modern CPUs have a huge dynamic range of clock frequencies, with very different efficiency figures along that curve. And all the marketing is about obfuscating the data and confusing the user. The worst part: users seem OK with it, and even tech journalists don't mind.


While marketing folks and fanboys can read a lot into things from CPUs that have such a wide dynamic range, it isn't for nothing that they're doing that.

Back in the old days (not the real old days, I'm talking 90s here) CPUs would always run at the same speed and burn pretty much the same power. When the system wasn't doing anything it just ran an idle loop which just repeatedly checked if there was anything to do millions of times a second.

The first power based optimization was adding a "halt" instruction that could basically stop the CPU until the next interrupt, and when operating systems implemented it systems that were mostly idle used less power. It was certainly a win for those early laptops!

Over time we've added ways for CPUs to run slower when there's less work, and run faster for a short time when there's more work. How much effort to put into saving power depends a lot on your power budget - Apple has had a lot more reason to worry about saving power as its CPUs were for phones first, while Intel's were desktop/server first though after P4 when they dropped the desktop/server focused P4 for a core originally designed exclusively for their laptop line they've tried to balance both sides.

If you balance both sides like Intel attempts to do your CPU will operate across the entire power curve, not just the lower end where Apple is forced to live but into the upper reaches where only overclockers used to dare tread. That makes it really hard to say "this core draws x watts" because when it is clocked down in scenarios where it has little to do it can draw very little power. Maybe not as little as Apple's big cores when doing a similar "little to do" but not terribly far away. It can also for short periods draw dozens of watts for a single core, which may be sustainable for a while if you only need the one core but you can't run all of them that way for long with any sort of reasonable cooling solution.

If you calculate performance by "who finishes first" you give the advantage to designs that allow cores to operate at the extreme end of the power curve. If you calculate efficiency by "who completes task X with the least amount of power" you give the advantage to designs that force cores to operate at the nice flat part of the power curve (or better yet have small cores designed to perform less well in exchange for less power)

Unless you have a core that both finishes first AND uses less power to complete the task than its competition, you can't measure both at once. Even then, arguably, that core could either finish even MORE quickly by allowing it to operate higher on the power curve or be even MORE efficient by forcing to operate lower on the power curve.

Since such a "double win" is pretty rare, and even when it happens is not likely to be an advantage maintained for long, you're left trying to measure one thing while controlling multiple variables. It's like the uncertainty principle: you can't measure both performance and efficiency at the same time; the more you try to hold one constant, the more difficult an honest, proper measurement of the other becomes.
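The trade-off described above can be made concrete with a toy energy-to-completion calculation: the boosted operating point wins the race but loses on energy. All numbers are made up; only the super-linear shape of the power curve matters:

```python
# Toy illustration: "who finishes first" and "who uses least energy"
# reward opposite ends of the power curve. Numbers are made up; only
# the shape (power grows super-linearly with speed) matters.

# (relative_speed, watts) for two hypothetical operating points of one core
fast = (1.30, 20.0)   # boosted: 30% faster, far up the power curve
slow = (1.00, 5.0)    # at the flat part of the curve

def energy_for_task(speed: float, watts: float, work: float = 1.0) -> float:
    """Energy (arbitrary units) = power * time, where time = work / speed."""
    return watts * (work / speed)

e_fast = energy_for_task(*fast)   # ≈ 15.4
e_slow = energy_for_task(*slow)   # = 5.0

# The boosted point wins on completion time but loses ~3x on energy.
print(f"fast: {e_fast:.1f} units, slow: {e_slow:.1f} units")
```

Ranking these two points depends entirely on whether the metric is time-to-completion or energy-to-completion, which is the uncertainty-principle analogy in miniature.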
 
  • Like
Reactions: BorisTheBlade82

eek2121

Platinum Member
Aug 2, 2005
2,930
4,026
136
Just because AMD marketing chose to clock Zen3 @ 5Ghz 1.5V, where it chews 20-25W means nothing about how efficient actual cores are.
I did a very unscientific test of pinning CB23 to one core on static OC 4.4Ghz 5950x -> average power consumption for that core was 3.9W as shown in HWInfo average and CB23 score was 1467pts. So very comparable to what native M1 is scoring at very comparable power usage per core.

That pretty much invalidates everything about "opportunistic overclocking" and so on, ZEN3 is just incredibly efficient when properly clocked and fed voltage for those clocks.

EDIT: Due to affinity setting problem with CB23, my results are off by 2W => look at https://forums.anandtech.com/threads/apple-a15-announced.2597187/post-40596675 sorry

My 5950x uses an average of 12.5W per core in heavy workloads including benchmarks.

Without hyperthreading: (Running only on core 0 thread 0), Prime95 can push the core up to around 19.3W at 4.65GHz (sustained). Disabling AVX2 in Prime95 drops the max usage to 17W (4.65GHz), and disabling AVX drops the max usage to 16.5W (4.65 GHz).

With hyperthreading: AVX2 - 22.4W, 4.65 GHz clocks, AVX - 22.3W, No AVX - 19.7W. There was a brief spike to 23.4W, but it quickly dropped back down.

I tested this across a few different cores. I'm going out of town for a while, so I won't have time to complete further testing, but IMO it is apples and oranges. The two chips are designed around two different power targets and for very different workloads. You could cap a Zen 3 chip to 9 W and it would still perform great.

Remember that the M1 only has 4 cores that will clock to 3+ GHz. The rest do not (in the M1 they run at around 2.06 GHz). If/when Apple makes a chip with 8 big cores, the chip would consume significantly more power. This is why it's important to look at the SPEC multithreaded workloads over the single-threaded workloads. Hybrid chips can look really amazing and high-performing on the outside, but actually be not so great. That isn't a knock on hybrid chips, but rather a reminder to those who claim that Apple is somehow significantly ahead of Intel/AMD.

A 15W Zen 2 chip has better multi-core performance in SPECint than the M1. A 15W Zen 3 chip has better performance in SPECfp and SPECint than the M1. The M1 is on the N5 process, so it's not surprising perf/watt is better. Unfortunately, AMD is not launching N5-based mobile parts for a while, so we don't get to see what AMD mobile performance looks like. The most interesting comparison will be some of the Intel Alder Lake SKUs.
 

roger_k

Member
Sep 23, 2021
36
61
61
If/when Apple makes a chip with 8 big cores, the chip would consume significantly more power.

Yes it will, but it will still max out at 5 W per core (assuming the same architecture and the same peak clock frequency).

A 15W Zen 2 chip has better multi-core performance in SPECint than the M1. A 15W Zen 3 chip has better performance in SPECfp and SPECint than the M1.

Except that the 15W Zen chip is not consuming 15 W. It will consume 30-40 watts for the first ~2 minutes of running the workload and will settle down to 15 W after approximately 3 minutes. I have linked the relevant AnandTech article before, and I will link the power consumption graph directly here:

Power%20-%2015W%20Comp%20yCr_575px.png


This is Zen2, but Zen3 behaves very similarly from what we have seen so far.

You are discussing SPEC results as if you were comparing a CPU cluster running at 15 W to another cluster running at 15 W, but in actuality you are comparing a cluster running at 20-25 W to one running at 30-40 W.
 
  • Like
Reactions: Tlh97

coercitiv

Diamond Member
Jan 24, 2014
6,187
11,859
136
Except that the 15W Zen chip is not consuming 15 W. It will consume 30-40 watts for the first ~2 minutes of running the workload and will settle down to 15 W after approximately 3 minutes.
You are discussing SPEC results as if you were comparing a CPU cluster running at 15 W to another cluster running at 15 W, but in actuality you are comparing a cluster running at 20-25 W to one running at 30-40 W.
SPEC is a benchmark that takes 6+ hours to complete on a low TDP CPU, the first two minutes benefiting from boost are irrelevant.
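This is easy to sanity-check with average-power arithmetic: a short boost window is negligible over a run that long.

```python
# Sanity check: over a 6+ hour SPEC run, a 2-minute 35 W boost window
# barely moves the average power of a chip that sustains 15 W.
# Times and wattages are taken from the two posts above.

boost_watts, boost_seconds = 35.0, 120          # initial turbo window
sustained_watts = 15.0
total_seconds = 6 * 3600                        # 6-hour run

avg = (boost_watts * boost_seconds
       + sustained_watts * (total_seconds - boost_seconds)) / total_seconds

print(f"average power over the run: {avg:.2f} W")  # ≈ 15.11 W
```

Whether the boost matters therefore hinges on the point raised in the next post: whether the run is one continuous 100% load, or many short sequences that each re-trigger the boost window.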
 

roger_k

Member
Sep 23, 2021
36
61
61
SPEC is a benchmark that takes 6+ hours to complete on a low TDP CPU, the first two minutes benefiting from boost are irrelevant.

Is that 6 hours of continuous 100% CPU activity, or are we talking about short repeated benchmark sequences with pauses that could reduce the thermal load and allow the CPU to maintain its higher power state for longer?

For example, according to AnandTech, a 5980HS running at 15 W has only a marginally lower SPEC2017 score than the same CPU running at 35 W (~7%, to be more precise). I mean, we all know that Zen 3 is very efficient at lower clock frequencies, but do we really want to say that 15 W is the optimum and anything beyond it is just diminishing returns? Besides, note how a six-core desktop Zen 3 effortlessly overtakes the 8-core mobile chips by raising the power ceiling to 65 W. So somehow upping the available per-core power by ~2x (15 W mode to 35 W mode) results in only a 7% improvement, but upping it by ~2x again (35 W mode to 65 W mode), while simultaneously reducing the core count by a quarter, results in a 23% improvement. I don't know about you, but to me the only way to make sense of these results (taking into account the diminishing returns from increasing power) is that these TDP targets do not represent the actual power draw of the chips during the test. So either the 35 W config consumes much less than 35 W, or the 15 W config consumes more than 15 W. Given what we know about Zen 3 power consumption from other tests, I'd say it's the latter.
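One way to see the inconsistency being argued here is to take the relative scores at face value and divide by the nominal TDP labels. If those labels reflected actual draw, efficiency would have to collapse ~3x between the 15 W and 35 W modes, which the per-core data posted earlier makes implausible. Scores are normalized to the 15 W result; the +7% and +23% deltas come from the paragraph above:

```python
# If nominal TDP labels equalled actual draw, score-per-nominal-watt
# would have to collapse ~3x across these configs. Scores are relative
# to the 15 W 5980HS result, using the deltas cited above.

points = [
    ("Cezanne 8C @ 15 W nominal", 1.00,        15),
    ("Cezanne 8C @ 35 W nominal", 1.07,        35),   # +7% per the post
    ("Vermeer 6C @ 65 W nominal", 1.07 * 1.23, 65),   # +23% per the post
]

base_eff = 1.00 / 15  # score per nominal watt at the 15 W baseline

for label, score, tdp in points:
    rel_eff = (score / tdp) / base_eff
    print(f"{label}: score {score:.2f}, "
          f"score/nominal-W = {rel_eff:.2f}x baseline")
```

The implied efficiency drops to ~0.46x and ~0.30x of baseline, a far steeper collapse than Zen 3's measured frequency/power curve supports, which is the basis for concluding that at least one of the nominal figures is far from the real draw.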

P.S. I would really love someone to test modern Intel and AMD chips by capping their power levels to 15 W and seeing how they perform. The way turbo works on modern x86 makes it extremely difficult to reason about the performance per watt of these chips.

1632920817981.png
 
  • Like
Reactions: Thibsie

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
P.S. I would really love someone to test modern Intel and AMD chips by capping their power levels to 15 W and seeing how they perform. The way turbo works on modern x86 makes it extremely difficult to reason about the performance per watt of these chips.

LOL. This kills the fanboy. Why do you think Intel constantly harps on about "AC performance"? It's so they can pretend their 15W chips can dish out the performance even while sucking down twice as much. It is also how this guy can actually write something like:

A 15W Zen 2 chip has better multi-core performance in SPECint than the M1. A 15W Zen 3 chip has better performance in SPECfp and SPECint than the M1.

Pure cope.
 

Commodus

Diamond Member
Oct 9, 2004
9,210
6,809
136
All this nitpicking seems to gloss over the practical reality... M1-based Mac laptops last a very long time on battery and are ahead of comparable Intel/AMD chips in some benchmarks, even if they're not always ahead. That's pretty great for a first showing, and I'd expect the M1X and M2 to build on that.

That, and history suggests you underestimate Apple's CPU design chops at your peril. It started out making merely competitive mobile chips with the A4 in the iPad and iPhone 4; it's now at the point where its chips frequently outperform the following year's Snapdragon flagship, let alone the current year's. That's not a guarantee of supremacy in computers, but I'm sure Intel, AMD and Microsoft are at least a little worried that Macs might repeat that pattern and claim a clear lead.
 

roger_k

Member
Sep 23, 2021
36
61
61
Yes.

Here's a quote from Andrei F. on the subject:

Thank you, this is most interesting.

But now I am completely confused. Why are the scores for the 15 W and the 35 W Cezanne this close? Why does the 65 W Vermeer perform significantly better? I feel like something essential is missing...

P.S. Look at the difference in scores for the 15 W and the 28 W Tiger Lake — a whopping 30%. That makes much more sense IMO.

All this nitpicking seems to gloss over the practical reality... M1-based Mac laptops last a very long time on battery and are ahead of comparable Intel/AMD chips in some benchmarks, even if they're not always ahead. That's pretty great for a first showing, and I'd expect the M1X and M2 to build on that.

That, and history suggests you underestimate Apple's CPU design chops at your peril. It started out making merely competitive mobile chips with the A4 in the iPad and iPhone 4; it's now at the point where its chips frequently outperform the following year's Snapdragon flagship, let alone the current year's. That's not a guarantee of supremacy in computers, but I'm sure Intel, AMD and Microsoft are at least a little worried that Macs might repeat that pattern and claim a clear lead.

Regardless of our personal preferences and taking sides, I just want to get to the truth: how energy efficient are various modern architectures and their configurations when doing computational work, and how does that energy efficiency relate to their peak and sustained performance? I mean, we are not talking ethics here; this kind of question should have an objective, empirical answer, right?

I think the story behind Apple CPUs is fairly straightforward. Tiger Lake is also fairly clear. But the AMD story is just muddy. Something doesn't check out.
 
Last edited:
  • Like
Reactions: Tlh97

Hitman928

Diamond Member
Apr 15, 2012
5,245
7,793
136
Thank you, this is most interesting.

But now I am completely confused. Why are the scores for the 15 W and the 35 W Cezanne this close? Why does the 65 W Vermeer perform significantly better? I feel like something essential is missing...

P.S. Look at the difference in scores for the 15 W and the 28 W Tiger Lake — a whopping 30%. That makes much more sense IMO.

I'm not going to claim to have all the facts/data on this, but in terms of TGL vs Zen 3, the Intel process/architecture really struggles to get down to 15 W. I don't know if the process is just leakier or the architecture is just much more power hungry (probably a bit of both), but Cezanne's lower end of the freq/power graph is much more forgiving than TGL's. The flip side is that TGL also scales better at the higher end of the power curve. In other words, Cezanne is much more tuned for lower-power operation than TGL and hits large diminishing returns much sooner as you scale up the power.

Edit: I'm not saying this entirely explains what you are pointing out, but it is a big part of it. Part of it is probably also how each subtest scales with cores and the load it puts on the CPU. Certain loads that use 8 cores can be less power-intensive than others that use 8 cores, and as such allow more liberal frequencies. We'd need to see more detailed breakdowns of the individual tests to know for sure.
 
  • Like
Reactions: Tlh97