Question Apple A15 announced


jpiniero

Lifer
Oct 1, 2010
14,578
5,203
136
Seems like neither the CPU nor the GPU is much faster than the A14. Might be due to wanting lower power draw for better battery life.

The NPU did get a bump: 15.8 vs. 11.8 TOPS.

Edit: The Pro does get an increase from 4 to 5 GPU cores. Might be useful because of the 120 Hz VRR they added.
 
Last edited:

eek2121

Platinum Member
Aug 2, 2005
2,930
4,024
136
Lots going on at Apple's SoC development department of late + Covid + loss of staff.... What I'm waiting for is the M1X (rumored name). Supposedly for the MacBook Pro and iMac Pro. Since one would likely want to use both for serious work, and even game on the iMac Pro - I am really interested in what is coming down the pike and whether or not they will use AMD mobile GFX modules (maybe with the ability to power off the discrete GPU and switch to the one on the SoC to lower power consumption on the notebook).
Apple has made it very clear that future Macs will use Apple GPUs. If you go back and look at their announcement videos (on mobile, not going to find/link this), you can see exactly how clear (“crystal”) they made this point. I called it long ago and I am sticking to it: Apple is making a huge mistake by dumping x86. They should have switched to AMD, but I strongly suspect their contract with Intel kept them from doing so.

Apple will never have a fast enough machine to compel me to switch. My workload is on an Intel Mac, but the successor is an AMD machine running Linux. Apple can’t compete in raw CPU/GPU compute power. My AMD system narrowly trails in single core, but handily beats the M1 in multicore. The RTX 3070 handily walks all over all of Apple’s current GPUs. Sure, Apple is rumored to be releasing faster chips later, but so is Intel. Thermodynamics continue to show everyone who is boss, regardless of the company.

In the meantime, waiting on my iPhone. It arrives Friday. 😀
 
  • Like
Reactions: Arkaign

jeanlain

Member
Oct 26, 2020
149
122
86
due to the massive brain drain
I suppose you're referring to some recent rumour about >100 engineers having left the SoC team at Apple. An ex-AMD CPU designer posting at MacRumors says this rumour can't be true: he personally knows many CPU engineers working at Apple, and none of them has left.
 
Last edited:
  • Like
Reactions: Viknet and Ajay

roger_k

Member
Sep 23, 2021
36
61
61
Apple will never have a fast enough machine to compel me to switch. My workload is on an Intel Mac, but the successor is an AMD machine running Linux. Apple can’t compete in raw CPU/GPU compute power. My AMD system narrowly trails in single core, but handily beats the M1 in multicore. The RTX 3070 handily walks all over all of Apple’s current GPUs. Sure, Apple is rumored to be releasing faster chips later, but so is Intel. Thermodynamics continue to show everyone who is boss, regardless of the company.

Apple's GPU IP currently has a 2-3x perf/watt lead over Nvidia and AMD. They deliver a GPU with 1024 ALUs and 2.6TFLOPS of theoretical peak throughput for 10 watts of power. This is not the kind of lead you can get from a process advantage alone. And let's not even talk about CPUs, where their perf/watt advantage is closer to 4x; meanwhile, Intel is adopting the big.LITTLE architecture with a lot of fanfare simply to hide the fact that they don't know how to make their CPUs fast and energy-efficient at the same time.

I have a feeling that your posts will not age well. It's quite interesting to me how folks have been consistently underestimating and dismissing Apple's hardware efforts. A couple of years ago, the sentiment was all "Geekbench is useless and those ARM CPUs will never be as fast as x86 in real desktop workloads". Now that Apple has demonstrated a 4x perf-per-watt advantage with the same peak performance, it is "sure, it's ok for an ultrabook SoC, but it's not gonna scale". The funny thing is that the same people are praising AMD, whose CPU strategy in recent years mostly boils down to maximizing throughput at the cost of peak performance and obfuscating TDP figures. All while, back in the real world, Apple's mobile phone chip contains as much last-level cache as AMD's 8-core desktop CCX...
 
Last edited:

roger_k

Member
Sep 23, 2021
36
61
61
Lots going on at Apple's SoC development department of late + Covid + loss of staff.... What I'm waiting for is the M1X (rumored name). Supposedly for the MacBook Pro and iMac Pro. Since one would likely want to use both for serious work, and even game on the iMac Pro - I am really interested in what is coming down the pike and whether or not they will use AMD mobile GFX modules (maybe with the ability to power off the discrete GPU and switch to the one on the SoC to lower power consumption on the notebook).

Apple made it very clear that they will not use any third-party GPUs. Which makes perfect sense from both the hardware side (Apple GPUs are going to be faster in the same TDP bracket) and the software side (Apple GPUs offer a unified, consistent GPU programming model that is not compatible with third-party hardware).
 

uzzi38

Platinum Member
Oct 16, 2019
2,613
5,848
146
Apple's GPU IP currently has 2-3x perf/watt lead over Nvidia and AMD. They deliver a GPU with 1024 ALUs and 2.6GFLOPS of theoretical peak throughput for 10 watts of power. This is not the kind of lead you can get from the process advantage alone.

I have a feeling that your posts will not age well. It's quite interesting to me how folks have been consistently underestimating and dismissing Apple's hardware efforts. A couple of years ago, the sentiment was all "Geekbench is useless and those ARM CPUs will never be as fast as x86 in real desktop workloads". Now that Apple has demonstrated a 4x perf-per-watt advantage with the same peak performance, it is "sure, it's ok for an ultrabook SoC, but it's not gonna scale".
Hm? Only if you compare with crusty old Vega IP. You know, that IP that wasn't efficient even when it launched? Also, you're pretty significantly off with the peak throughput, because it's TFLOPs you should be talking about, not GFLOPs.

Van Gogh in the Steam Deck is about 1.6TFLOPs for roughly 10W of GPU-only power, and that's with half the ALUs. Rembrandt (the next-gen APU) has 768 ALUs and, if my estimate is right, should hold about 1.3GHz with GPU power locked at 10W, which gives you about 1.9TFLOPs.

And that's assuming it has identical V/f properties to my 6700XT, but realistically it should be safe to assume an extra 200-300MHz worth of clocks from additional optimisations and actual binning, something that doesn't currently take place with either Van Gogh or Navi 22 silicon.

By which point you're looking at over 2.2TFLOPs, which is now actually in line with the 10-15% process improvement from N6 -> N5. Oh, and let me again remind you that Apple still holds an ALU advantage here, meaning they can clock their iGPU lower and get the same performance, which by nature brings an efficiency improvement on its own.
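If anyone wants to sanity-check these peak-throughput figures, here is a rough back-of-the-envelope sketch in Python. The ALU counts, clocks, and power numbers are the estimates quoted in this thread (the M1 clock is simply backed out of the ~2.6 TFLOPS figure), not measurements, and it assumes 2 FLOPs per ALU per clock (one FMA):

```python
# Back-of-the-envelope peak throughput and perf/W, using the rough numbers
# quoted in this thread (estimates, not measurements). Peak FLOPS assumes
# one FMA per ALU per clock, i.e. 2 FLOPs/ALU/cycle.

def peak_tflops(alus: int, clock_ghz: float) -> float:
    """Theoretical peak throughput in TFLOPS: ALUs * 2 FLOPs * clock (GHz) / 1000."""
    return alus * 2 * clock_ghz / 1000

gpus = {
    # name:            (ALUs, clock in GHz, GPU power in W)
    "Apple M1":         (1024, 1.27, 10),  # clock backed out of the ~2.6 TFLOPS claim
    "Van Gogh (Deck)":  (512,  1.60, 10),  # ~1.6 TFLOPS per the post above
    "Rembrandt (est.)": (768,  1.30, 10),  # estimated clock at a 10 W GPU power limit
}

for name, (alus, clock, watts) in gpus.items():
    tflops = peak_tflops(alus, clock)
    print(f"{name:18s} {tflops:4.2f} TFLOPS  ~{tflops / watts * 1000:3.0f} GFLOPS/W")
```

On those theoretical-peak assumptions, the gap at 10 W comes out closer to 1.3-1.6x than to 2-3x, which is essentially the question being raised here.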

So then, where does your 2-3x efficiency advantage come from?
 
Last edited:

roger_k

Member
Sep 23, 2021
36
61
61
Hm? Only if you compare with crusty old Vega IP. You know, that IP that wasn't efficient even when it launched? Also, you're pretty significantly off with the peak throughput, because it's TFLOPs you should be talking about, not GFLOPs.

I am talking about Navi 2, not Vega, using estimates across various GPUs (mostly mobile, since they are more likely to be binned for power efficiency).

Thanks for pointing out the typo!

Van Gogh in the Steam Deck is about 1.6TFLOPs for roughly 10W of GPU-only power, and that's with half the ALUs. Rembrandt (the next-gen APU) has 768 ALUs and, if my estimate is right, should hold about 1.3GHz with GPU power locked at 10W, which gives you about 1.9TFLOPs.

And that's assuming it has identical V/f properties to my 6700XT, but realistically it should be safe to assume an extra 200-300MHz worth of clocks from additional optimisations and actual binning, something that doesn't currently take place with either Van Gogh or Navi 22 silicon.

By which point you're looking at over 2.2TFLOPs, which is now actually in line with the 10-15% process improvement from N6 -> N5.

Oh, and let me again remind you that Apple still holds an ALU advantage here, meaning they can clock their iGPU lower and get the same performance, which by nature brings an efficiency improvement on its own.

So then, where does your 2-3x efficiency advantage come from?

All fair points. Direct comparison is difficult since AMD does not actually have a 16 CU Navi 2 part. I agree with you that by using highly binned parts Navi 2 might be able to hit around 2 TFLOPS. But that's the thing. Apple is shipping millions and millions of M1 units, which at least to me suggests that they are not cherry picking the best chips for this.
 

uzzi38

Platinum Member
Oct 16, 2019
2,613
5,848
146
I am talking about Navi 2, not Vega, using estimates across various GPUs (mostly mobile, since they are more likely to be binned for power efficiency).

Thanks for pointing out the typo!



All fair points. Direct comparison is difficult since AMD does not actually have a 16 CU Navi 2 part. I agree with you that by using highly binned parts Navi 2 might be able to hit around 2 TFLOPS. But that's the thing. Apple is shipping millions and millions of M1 units, which at least to me suggests that they are not cherry picking the best chips for this.
Oh, in case it wasn't clear, I think 2TFLOPs is pretty certain - I would expect an extra 100MHz at an absolute minimum, even outside of binning, just from the move to N6 alongside other improvements. It's the additional 100-200MHz above that that I singled out as being a maybe.
 

roger_k

Member
Sep 23, 2021
36
61
61
In the end, the proof is in the sipping :) Based on hardware that is currently available and shipping in volume, I am fairly confident that Apple can build a 40-50 watt GPU with 10 TFLOPS of peak throughput. I am less confident that AMD can build such a GPU with a TDP under 70W, even with very selective binning.
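To make the extrapolation behind that explicit: a minimal sketch, assuming throughput and GPU power both scale roughly linearly with ALU count at a fixed clock (which ignores memory, uncore, and binning effects), starting from the M1 figures discussed above:

```python
# Linear extrapolation from the M1 GPU figures quoted in this thread
# (~2.6 TFLOPS at ~10 W of GPU power). Purely illustrative; real scaling is
# not perfectly linear once memory bandwidth and uncore power enter the picture.

m1_tflops, m1_gpu_watts = 2.6, 10.0

for scale in (2, 4):  # hypothetical 2x and 4x ALU-count Apple GPUs
    print(f"{scale}x ALUs: ~{m1_tflops * scale:4.1f} TFLOPS "
          f"at ~{m1_gpu_watts * scale:2.0f} W (GPU only)")
```

The 4x row is where a "~10 TFLOPS at ~40 W" figure would come from under that assumption.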
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,024
136
?
Their CPUs and GPUs beat Intel's and AMD's in perf/W, and their first low-power CPU is already a leader in single-core performance. It's not hard to imagine how that can scale up.
Apple's GPU IP currently has a 2-3x perf/watt lead over Nvidia and AMD. They deliver a GPU with 1024 ALUs and 2.6TFLOPS of theoretical peak throughput for 10 watts of power. This is not the kind of lead you can get from a process advantage alone. And let's not even talk about CPUs, where their perf/watt advantage is closer to 4x; meanwhile, Intel is adopting the big.LITTLE architecture with a lot of fanfare simply to hide the fact that they don't know how to make their CPUs fast and energy-efficient at the same time.

I have a feeling that your posts will not age well. It's quite interesting to me how folks have been consistently underestimating and dismissing Apple's hardware efforts. A couple of years ago, the sentiment was all "Geekbench is useless and those ARM CPUs will never be as fast as x86 in real desktop workloads". Now that Apple has demonstrated a 4x perf-per-watt advantage with the same peak performance, it is "sure, it's ok for an ultrabook SoC, but it's not gonna scale". The funny thing is that the same people are praising AMD, whose CPU strategy in recent years mostly boils down to maximizing throughput at the cost of peak performance and obfuscating TDP figures. All while, back in the real world, Apple's mobile phone chip contains as much last-level cache as AMD's 8-core desktop CCX...

I don't think either of you get it. Most of Apple's "Advantage (not really)" comes from the fact that:
  1. They are on a superior node.
  2. They are using a hybrid "big.LITTLE" design.
  3. AMD and NVIDIA have not, until recently, prioritized mobile graphics.
  4. Apple doesn't have to support Vulkan, OpenGL, DirectX (so they can optimize their drivers and such around Metal)
Get back to me when Apple can scale their performance up to 8 cores and beyond. They'll impress me if they still maintain much, if any, of their supposed perf/watt advantage. Meanwhile AMD has 15W and 45W Zen 3 chips that work great. AMD's biggest issue is not moving mobile to 5nm fast enough. We won't see mobile 5nm Zen 4 designs before 2023, but it doesn't matter. My laptop performs just as well as (better in some areas than) an M1-based MacBook. I don't care if it's rated for 45W. In the end, with overall system usage, it does not have a huge disadvantage in battery life, and it offers something you can't get from an Apple device: real graphics performance, solid multicore performance, and the ability to use CUDA (among other things).

It's easy to praise a company for what they HAVE done. Now Apple has a very difficult task of scaling that performance up the stack. I promise you that is no easy task. Note that I'm not claiming we won't see somewhat better-performing parts; an "M1X"-type chip with 8 big cores would probably beat an AMD Zen 3 chip, but once again, that's down to node advantages + AMD being on an old design... and the Zen chip lets you run far more software and use far more powerful GPUs. I also doubt they'll be able to beat Zen 4 in raw performance, and I wonder where Intel will fit into the whole thing with ADL-S. Seems like an ADL-S chip with 4+4 would be pretty close to Apple in terms of raw performance, if not perf/watt.
 
  • Like
Reactions: Arkaign

nxre

Member
Nov 19, 2020
60
103
66
If Apple's designs couldn't scale, they would never commit to replacing their entire lineup with their own silicon.
Whether they can remain competitive is another question, but they do have a flawless execution history. The A15 seems to be their only miss in a decade, but it's also not surprising or alarming, given that it seems like they reused the previous core because the new one wasn't ready for a September launch. If the CPU team at AMD or Intel gets behind schedule, they can just delay the launch. If Apple's CPU team gets behind schedule, they still have to launch an iPhone in September, with or without a new CPU core. It would be more alarming if a new design had launched with minimal/zero improvements.
 

jeanlain

Member
Oct 26, 2020
149
122
86
I don't think either of you get it. Most of Apple's "Advantage (not really)" comes from the fact that:
  1. They are on a superior node.
  2. They are using a hybrid "big.LITTLE" design.
  3. AMD and NVIDIA have not, until recently, prioritized mobile graphics.
  4. Apple doesn't have to support Vulkan, OpenGL, DirectX (so they can optimize their drivers and such around Metal)
The node alone does not explain Apple's lead. We're talking about a 3X advantage in perf/W for the GPU.
The 4X single-thread perf/W lead (as of October 2020) concerned the big cores at max performance. big.LITTLE doesn't change that.
Intel has been prioritizing mobile graphics forever. Yet the M1 is much faster than Intel's 96-EU Xe in optimised benchmark tools (that is, not those using OpenGL or some sort of emulation or translation layer like Rosetta or MoltenVK) while consuming half as much power. The same comparison can be made with mobile AMD GPUs. Rendering performance increases with the number of CUs provided memory bandwidth is not limiting. I don't see why Apple's lead in the mobile segment should not apply to the high-end segment.

I'm not sure what the 4th point has to do with your initial claim.
 
Last edited:

nxre

Member
Nov 19, 2020
60
103
66
The node-advantage discussions always seem to miss the point that designs aren't done in a vacuum and then tacked onto whatever node is available. AMD's designs are not built around bleeding-edge nodes, and none of their current designs could be produced on one, because they have frequency/power requirements that are not achievable on a bleeding-edge node. Unless TSMC is so good that they can viably ship 5GHz designs on a bleeding-edge node in its first year of mass production.
 

jeanlain

Member
Oct 26, 2020
149
122
86
Get back to me when Apple can scale their performance up to 8 cores and beyond. They'll impress me if they still maintain much, if any, of their supposed perf/watt advantage. Meanwhile AMD has 15W and 45W Zen 3 chips that work great.
We should see in a month or so. I'd be shocked if Apple decided to use their own silicon without being certain that they could scale their architecture to more than 8 cores.
As for the competition, make sure to compare peak power consumption numbers. I suppose 15W is not the power consumed by all Ryzen cores at nominal frequency.
15-20W really is the max power that the M1 CPU consumes. This chip has no concept of "boost" clock, PL1, PL2 or whatever. It runs at max frequency all the time, as long as it's not passively cooled. OTOH, an AMD/Intel CPU rated at 15W can consume >40W for brief amounts of time.
 
  • Like
Reactions: Viknet

Mopetar

Diamond Member
Jan 31, 2011
7,830
5,977
136
Why would you assume it has a long pipeline? It is quite obviously an A14 core that's clocked higher. That would explain the increased power use (the A15 may or may not use N5P, we assume so but maybe it wasn't ready in time?) and lack of any IPC improvement.

Does anyone seriously think they did a brand new core that somehow got EXACTLY 0% IPC improvement? What are the odds of that?

The silliest part of that line of thinking is that the clock speed barely increased, so not only would they have changed designs for no IPC gain, they wouldn't even have gotten significant clock speed increases from it either. I think it should have been pretty obvious to anyone looking at the numbers we had that they just reused the same core and bumped up the clocks a little bit.
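Since performance scales roughly as IPC x clock, the quick check is whether the score moved by about the same factor as the clock; if so, the implied IPC change is ~0% and a reused core is the simplest explanation. A small sketch with purely illustrative placeholder numbers (not measured A14/A15 results):

```python
# Illustrative only: if the score ratio matches the clock ratio, the implied
# IPC change is ~0%, i.e. consistent with the same core at a higher clock.
# The clocks and scores below are placeholders, not measured A14/A15 results.

a14_clock_ghz, a15_clock_ghz = 3.0, 3.2
a14_score, a15_score = 1600, 1710

clock_ratio = a15_clock_ghz / a14_clock_ghz
score_ratio = a15_score / a14_score
implied_ipc_change = score_ratio / clock_ratio - 1

print(f"clock: +{clock_ratio - 1:.1%}, score: +{score_ratio - 1:.1%}, "
      f"implied IPC change: {implied_ipc_change:+.1%}")
```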
 

Commodus

Diamond Member
Oct 9, 2004
9,210
6,809
136
The A15 is definitely a modest update, but I also don't think it says much about what M1X or M2 will be like.

I'd rather look at Apple's long-term history. I've said it before, but it bears repeating: Apple started with a chip that was merely competitive with the latest Snapdragons (the A4) and is now at the point where its CPU typically outperforms next year's Snapdragon. You underestimate Apple's consistent iteration at your own peril.
 

Mopetar

Diamond Member
Jan 31, 2011
7,830
5,977
136
We should be seeing the M1X or M2 in about a month, just based on Apple's usual schedule. Of course, who knows how much of a spanner the pandemic threw into their plans.

I'd like to see an updated core design as part of their next SoC, but it's hard to imagine how, if there were troubles that prevented an update for the A15, they wouldn't also apply to an M-series chip being released a month later.

The only scenario that comes to mind is where Apple decided to focus a lot of time on the next M-series chip to make a different core for their Macs while just doing a refresh with the A15. There's a compelling argument for them to have a single core, but the realities of the desktop and notebook markets are different than the mobile world.

A core designed primarily for phones probably won't scale as well as Apple would like for their other products. They can certainly use the mobile core for products like the MacBook Air, where low power is preferable, but the Pro is going to want to trade efficiency for power to a degree.
 

Doug S

Platinum Member
Feb 8, 2020
2,248
3,476
136
I find it hilarious that people are still seriously claiming that Apple's decision to switch to ARM was a bad idea. Especially based on some strange belief system that claims with no evidence beyond some sort of gut feeling or wishful thinking that they will be unable to scale their hardware to larger machines.

I guess that belief system holds that Apple decided to switch to ARM because they could save a few bucks, without doing any internal development and proof of concept to know how it compared to x86 performance both natively and under emulation - and THEN announced they'd be converting the entire line without knowing whether they'd be able to produce anything that covers their high end.

Apple further made it clear they'd be using their own GPU and not third party GPUs, and that was all on a wing and a prayer too without any internal development and testing. I'm sure when they said that they had NO idea if they could scale their GPU up to workstation GPU levels of performance - after all, they are severely limited by having only 12 digits of cash on hand so clearly they couldn't afford to find out ahead of time!

This reminds me of the skepticism from some when rumors of Apple switching Macs to ARM were discussed on and off for years. People who had a clue saw how quickly they were increasing performance of their ARM SoCs and could see if that continued they'd be able to match x86 performance levels. People without a clue came up with all sorts of spurious reasons why such an attempt would fail, everything from claims that ARM is somehow only suitable for "phone workloads" (whatever those are) to claims that developers would be unwilling to port and Apple would be stuck running x86 emulation forever.

I see the spurious reasons are still coming hot and fast from a few holdouts, now including the even more ridiculous claim that, because AMD makes systems that beat the M1 on multicore, Apple can't ever meet or exceed it. Complete with made-up ideas that "Apple would have switched to AMD but they had a secret contract clause with Intel that prevented it". Plus throw in a few ready-made excuses in case Apple does beat AMD by talking about how Apple is ahead on process and only has to design for Metal and not DirectX, so when it happens claims can be made that "technically Apple isn't faster, it is only because of the process and cheating by using Metal".

I would have thought living in such a contrived fantasy world could only happen in the realm of partisan politics. But maybe Apple v Linux/Microsoft, AMD v Intel or x86 v ARM is "partisan politics" for some.
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,763
783
126
I expect the M1X/M2 to be significantly more efficient than the i7/i9 it is replacing, but the performance is more uncertain. Especially on the GPU side.

I have to admit I'm excited to see the benchmarks, though; this could be revolutionary if it scales well. That's the big unknown to me.
 

roger_k

Member
Sep 23, 2021
36
61
61
We should see in a month or so. I'd be shocked if Apple decided to use their own silicon without being certain that they could scale their architecture to more than 8 cores.
As for the competition, make sure to compare peak power consumption numbers. I suppose 15W is not the power consumed by all Ryzen cores at nominal frequency.
15-20W really is the max power that the M1 CPU consumes. This chip has no concept of "boost" clock, PL1, PL2 or whatever. It runs at max frequency all the time, as long as it's not passively cooled. OTOH, an AMD/Intel CPU rated at 15W can consume >40W for brief amounts of time.

Just a quick comment in the interest of clarity:

- Apple CPUs do have turbo boost and thermal limits, but their frequency and power consumption range is much narrower than on x86 chips
- An Apple Firestorm core peaks at around 5W; Zen 3 and Tiger Lake peak at around 20W - for comparable peak performance
- AMD's 15W TDP is not the maximum power consumption, it's basically an arbitrary number that the system should eventually stabilize at. AMD's low-power CPUs perform that well because they are actually running at 30-50 watts for minutes before throttling down to 15W. It becomes very obvious when comparing their 15W CPUs and their 35W CPUs - they score virtually the same in multicore benchmarks. This does make 15W AMD CPUs good bang for the buck when you are after throughput, just don't make the mistake of believing they run at 15W...
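A toy model of that last point, with hypothetical numbers, just to show why a "15W" and a "35W" part can land on nearly the same score in a short multicore benchmark while diverging on longer runs:

```python
# Toy boost/throttle model (hypothetical numbers): a "15W TDP" part that may
# draw 45 W during an initial boost window before settling at its TDP.
# Short benchmarks mostly measure the boost window, so 15W and 35W parts
# end up looking nearly identical.

def average_power(tdp_w, boost_w, boost_window_s, run_s):
    """Average package power over a run of run_s seconds."""
    boosted = min(boost_window_s, run_s)
    sustained = max(run_s - boost_window_s, 0)
    return (boost_w * boosted + tdp_w * sustained) / run_s

for run_s in (60, 300, 1800):  # 1-minute, 5-minute, 30-minute workloads
    p15 = average_power(tdp_w=15, boost_w=45, boost_window_s=120, run_s=run_s)
    p35 = average_power(tdp_w=35, boost_w=45, boost_window_s=120, run_s=run_s)
    print(f"{run_s:5d} s run: '15W' part averages {p15:4.1f} W, "
          f"'35W' part averages {p35:4.1f} W")
```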
 
  • Like
Reactions: Viknet

eek2121

Platinum Member
Aug 2, 2005
2,930
4,024
136
I find it hilarious that people are still seriously claiming that Apple's decision to switch to ARM was a bad idea. Especially based on some strange belief system that claims with no evidence beyond some sort of gut feeling or wishful thinking that they will be unable to scale their hardware to larger machines.

I guess that belief system holds that Apple decided to switch to ARM because they could save a few bucks, without doing any internal development and proof of concept to know how it compared to x86 performance both natively and under emulation - and THEN announced they'd be converting the entire line without knowing whether they'd be able to produce anything that covers their high end.

Apple further made it clear they'd be using their own GPU and not third party GPUs, and that was all on a wing and a prayer too without any internal development and testing. I'm sure when they said that they had NO idea if they could scale their GPU up to workstation GPU levels of performance - after all, they are severely limited by having only 12 digits of cash on hand so clearly they couldn't afford to find out ahead of time!

This reminds me of the skepticism from some when rumors of Apple switching Macs to ARM were discussed on and off for years. People who had a clue saw how quickly they were increasing performance of their ARM SoCs and could see if that continued they'd be able to match x86 performance levels. People without a clue came up with all sorts of spurious reasons why such an attempt would fail, everything from claims that ARM is somehow only suitable for "phone workloads" (whatever those are) to claims that developers would be unwilling to port and Apple would be stuck running x86 emulation forever.

I see the spurious reasons are still coming hot and fast from a few holdouts, now including the even more ridiculous claim that, because AMD makes systems that beat the M1 on multicore, Apple can't ever meet or exceed it. Complete with made-up ideas that "Apple would have switched to AMD but they had a secret contract clause with Intel that prevented it". Plus throw in a few ready-made excuses in case Apple does beat AMD by talking about how Apple is ahead on process and only has to design for Metal and not DirectX, so when it happens claims can be made that "technically Apple isn't faster, it is only because of the process and cheating by using Metal".

I would have thought living in such a contrived fantasy world could only happen in the realm of partisan politics. But maybe Apple v Linux/Microsoft, AMD v Intel or x86 v ARM is "partisan politics" for some.
You clearly don't know or understand Apple. I owned a Mac 512K (and have owned numerous models between then and now); that is how far back I go with Apple. For modern Apple, the priorities are as follows:

  1. Profit margins
  2. Platform Control (enabling SaaS, which further leads to #1)
  3. User experience
  4. Performance
Just a quick comment in the interest of clarity:

- Apple CPUs do have turbo boost and thermal limits, but their frequency and power consumption range is much narrower than on x86 chips
- An Apple Firestorm core peaks at around 5W; Zen 3 and Tiger Lake peak at around 20W - for comparable peak performance
- AMD's 15W TDP is not the maximum power consumption, it's basically an arbitrary number that the system should eventually stabilize at. AMD's low-power CPUs perform that well because they are actually running at 30-50 watts for minutes before throttling down to 15W. It becomes very obvious when comparing their 15W CPUs and their 35W CPUs - they score virtually the same in multicore benchmarks. This does make 15W AMD CPUs good bang for the buck when you are after throughput, just don't make the mistake of believing they run at 15W...

All your numbers are incorrect. I could post some real numbers (I have access to multiple systems), but instead I will point you to AnandTech: https://www.anandtech.com/bench/product/2687?vs=2633

EDIT: That is comparing to Zen 2 because AT has not reviewed Zen 3 parts as of yet. The power consumption for Zen 3 is similar, with increased performance.

Also see this for Mac Mini testing: https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested
 
Last edited:

jeanlain

Member
Oct 26, 2020
149
122
86
Just a quick comment in the interest of clarity:

- Apple CPUs do have turbo boost and thermal limits, but their frequency and power consumption range is much narrower than on x86 chips
- An Apple Firestorm core peaks at around 5W; Zen 3 and Tiger Lake peak at around 20W - for comparable peak performance
- AMD's 15W TDP is not the maximum power consumption, it's basically an arbitrary number that the system should eventually stabilize at. AMD's low-power CPUs perform that well because they are actually running at 30-50 watts for minutes before throttling down to 15W. It becomes very obvious when comparing their 15W CPUs and their 35W CPUs - they score virtually the same in multicore benchmarks. This does make 15W AMD CPUs good bang for the buck when you are after throughput, just don't make the mistake of believing they run at 15W...
I agree on all points except on the concept of turbo boost. I don't think it applies to the M1, since the performance cores always run at the same clock speed when under load, so long as the SoC is cooled with a fan.
On Intel CPUs, the "boost" clock is rarely sustained on all cores. If Apple were using turbo boost, we'd see the M1 reach, say, 3.4 GHz for brief amounts of time or when using a single core.
 

jeanlain

Member
Oct 26, 2020
149
122
86
All your numbers are incorrect. I could post some real numbers (I have access to multiple systems), but instead I will point you to AnandTech: https://www.anandtech.com/bench/product/2687?vs=2633
31W is for the full Mac Mini, as your other link shows. Andrei (from AnandTech) measured the consumption of a single Firestorm core after someone pointed him to the powermetrics command (which he didn't know about at the time of the review). An M1 core consumes 3.x watts during Cinebench, and about 5-6W at most.
Compare to this :

It's not clear where the 34W figure for the 4800U comes from. This page shows a difference of 58W between active and idle states.
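For anyone who wants to reproduce that kind of per-cluster measurement, a minimal sketch using macOS's built-in powermetrics tool is below. The sampler and flag names are written from memory and may differ between macOS versions, so treat them as assumptions and check man powermetrics:

```python
# Minimal sketch: sample CPU (per-cluster) power on Apple Silicon using the
# built-in powermetrics tool. Requires macOS and sudo. The "cpu_power" sampler
# name and the -i/-n flags are assumptions from memory; verify with
# "man powermetrics" on your system.
import subprocess

def sample_cpu_power(interval_ms: int = 1000, samples: int = 5) -> str:
    """Run powermetrics and return its raw text output."""
    cmd = [
        "sudo", "powermetrics",
        "--samplers", "cpu_power",   # CPU/cluster power sampler (assumed name)
        "-i", str(interval_ms),      # sampling interval in milliseconds
        "-n", str(samples),          # number of samples before exiting
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(sample_cpu_power())
```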
 
Last edited:
  • Like
Reactions: Viknet

jeanlain

Member
Oct 26, 2020
149
122
86
  1. Profit margins
  2. Platform Control (enabling SaaS, which further leads to #1)
  3. User experience
  4. Performance
Where's your evidence that Apple considers performance a low priority? They could downclock their SoCs by 20% to gain a few hours of battery life and still be competitive. Yet they decided not to, and instead deliver the fastest smartphones on the market.
Apple won't release 5 kg MacBooks like those ridiculous gaming laptops, that's for sure, but their "pro" machines will be plenty fast.
 
Last edited:
  • Like
Reactions: Doug S

Doug S

Platinum Member
Feb 8, 2020
2,248
3,476
136
You clearly don't know or understand Apple. I owned a Mac 512K (and have owned numerous models between then and now); that is how far back I go with Apple. For modern Apple, the priorities are as follows:

  1. Profit margins
  2. Platform Control (enabling SaaS, which further leads to #1)
  3. User experience
  4. Performance


All your numbers are incorrect. I could post some real numbers (I have access to multiple systems), but instead I will point you to AnandTech: https://www.anandtech.com/bench/product/2687?vs=2633

EDIT: That is comparing to Zen 2 because AT has not reviewed Zen 3 parts as of yet. The power consumption for Zen 3 is similar, with increased performance.

Also see this for Mac Mini testing: https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested


Well, I didn't own one, but I often used a friend's Apple II+ as a kid, if you want to play a "how far do you go back with Apple" turd-flinging contest.

You put down your list like it is gospel, and while Apple considers all those things important, there's no reason why we should give your ranking of them any more credence than someone else's. You have already decided that switching to ARM is a bad idea, so you look at everything assuming the worst possible result, as that's the only way your prediction (which will look less and less credible with every new ARM-based Mac Apple announces) can come to pass. You assume Apple didn't plan ahead when they decided to transition to ARM, and will not be able to scale either their CPU or GPU much beyond the M1. Somehow their proven world-class architects will be unable to solve a problem that had been solved a half dozen times by RISC vendors 20 years before AMD was ever able to offer a system with a double-digit number of cores (and a FAR more difficult problem, as they had to scale between sockets and even daughterboards, not within a single chip or at most multiple chips in the same MCM).

I'm willing to bet you were one of those who claimed that if Apple switched to ARM they would force users to give up performance in native code, and that emulated code performance would be terrible - by design, as that's the only thing that could push developers to port. When M1 results, especially those on x86 code, were reported, you were probably one of those saying "yeah, what about Cinebench" or whatever other benchmark you want to claim is truly representative of real PC performance, and that all the benchmarks where the M1 Macs blew the doors off the Intel Macs they replaced should be ignored as fake or biased.

When Apple releases its next round of Macs, which will have 8 big cores assuming the Jade-C rumors are true, you'll be full of excuses: "well, yeah, they were able to get scaling to work 8 ways, but they'll never scale further", claiming without evidence that it is somehow a much harder task to scale a hardware design from 8 to 32 or 64 cores than it is to scale from 1 to 8, when in fact the opposite is true. When the Mac Pro comes out, even if it dominates all the AMD competition in benchmarks, you'll seize upon the ones where AMD wins and claim those are the benchmarks that really matter, and that anyway Apple is only winning because they have a process advantage over AMD and use Metal instead of DirectX, so it isn't even useful for games (which will be true - Apple has never cared about the hardcore gamer market and never will).

But please, tell us what other Macs you've owned and how that qualifies you to lecture us on the order in which Apple's priorities lie, and how that proves they cannot compete with Intel's and AMD's performance.