Discussion Apple Silicon SoC thread

Page 31

Eug

Lifer
Mar 11, 2000
23,583
996
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s
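For what it's worth, the 2.6 Teraflops figure lines up with the 128 EUs above if you assume the widely reported layout of 8 FP32 ALUs per EU and a roughly 1.28 GHz GPU clock (Apple publishes neither, so treat both as assumptions). A quick sanity check:

```python
# Back-of-envelope check of the 2.6 TFLOPS figure against the EU count above.
execution_units = 128
alus_per_eu = 8              # assumption: 8 FP32 ALUs per EU (widely reported)
clock_hz = 1.278e9           # assumption: ~1.28 GHz GPU clock (not published)
flops_per_alu_cycle = 2      # a fused multiply-add counts as 2 FLOPs

tflops = execution_units * alus_per_eu * clock_hz * flops_per_alu_cycle / 1e12
print(f"{tflops:.2f} TFLOPS")  # ~2.62, matching the quoted 2.6 TFLOPS
```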

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock speed differences).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC, and ProRes

M3 Family discussion here:

 
Last edited:

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
It's not only about the OS. I can install Linux on my PC if for some reason MS goes completely bonkers.

With Android it depends on whether the phone can be rooted, but even without root you can install different app stores or APKs directly. OK, I assume that's possible on macOS too, but not on iOS/iPhones. And with this change you can't install a different OS on the new MacBooks if Apple goes bonkers, or if you have a reason to trust them even less.
(see more below)



True, and at this point it doesn't yet matter that much. I'm thinking long term, in terms of abandoning free speech and moving toward censorship, which is getting worse and worse with the increased "political correctness" movements. I can't even explicitly write here what I mean or else I get at least a temp ban. Big corps will increasingly censor the "uncommon opinion". It's already forbidden in apps themselves. Next, developers who voice an uncommon opinion will get their keys revoked (a couple years max), and then at some point they'll lock users who voice an unpopular opinion out of their devices. Basically more and more like China.
Speaking of the end user, I'd like to disagree with you both. Closed ecosystems lead to $999 display stands.
 
  • Love
Reactions: Thunder 57

Hitman928

Diamond Member
Apr 15, 2012
5,180
7,631
136
I pretty much agree fully there. Very impressed with the GPU. I will also say I am extremely impressed with Apple's x86 emulator; that's a FAST emulator. Clearly a tremendous amount of engineering went into it, and into the initial software offerings for the Apple ARM platform in general. I do wonder if that same high-quality, super-fast code will be the norm, though? Apple obviously had tremendous reason to make sure the debut software was tip-top and super optimized.

Really the only thing that "annoyed" me about the review was the Speedometer benchmark. Which browsers were they using? I feel like the only fair test would have been to use the same browser on every platform.

The context of his post is in reference to the uncore power, not necessarily to the connectivity and I/O. The answer is, the M1 has very little I/O. No PCI-E lanes, No SATA. It's got a single display PHY (well, not counting the thunderbolt monitor muxing), a handful of USB/Thunderbolt PHYs, Ethernet, single audio port, and that's about it. Desktop SOC from Intel and AMD have 4x the I/O going on, which contributes heavily to "uncore" power consumption. Especially the memory controller needing to drive a fat 128-bit bus across a PCB. These things just cannot be overcome while still retaining the characteristics of a typical PC notebook.

Thank you for the info. I knew M1 had much reduced IO compared to x86 (laptop or desktop) chips, I just didn't know how much. Putting memory so close (physically and electrically) to the SOC obviously helps a lot to reduce IO power without degrading signal integrity, but even then, 0.1 W seemed crazy low. Anyway, Andrei also showed power when running other loads. I guess in the instantaneous snapshot I saw earlier, there was basically no memory use or probably hardly any IO activity at all which explains why the package power was so low.

In SPEC, specifically libquantum, which hits the memory pretty hard, the package power jumped to 2.7W. In the Aztec GPU benchmark, package power was 1.4 W. These are much more realistic numbers. Still very impressive to be that low, but as you mention, it does come with some compromises. It also seems like package power does not affect max allowed core power (outside of thermal limits), as shown in Andrei's all-out GPU+CPU test where the package power is 1.5W and the CPU is still using 21W. Anyway, it's all very interesting seeing the choices made and their effect. I'm excited to see how this all unfolds over the next few generations.
 
Last edited:

mikegg

Golden Member
Jan 30, 2010
1,740
406
136
Oh good, now you get to create your own definitions of common phraseology as well?
No, I didn't make up any terminology. You're just butt hurt.

When people say chip X is the fastest, they mean it's the fastest overall. No one means the fastest in everything.

People say Zen3 is the fastest desktop chip you can buy. But Zen3 does lose in some benchmarks to Comet Lake.

According to my "definition", I would still consider Zen3 to be faster than Comet Lake.

Same thing for M1. It loses in some benchmarks, sure but it's still the fastest laptop chip you can buy. Period.


100% of people don't use GB5 (to do any meaningful work). At least Cinema4D is good for something.



Oh good, now you get to create your own definitions of common phraseology as well?

The M1 is an impressive chip. Bravo to Apple for rolling this out - it's a shame we can't pick a different operating system for it (Linux) or get different configurations. That's Apple for you. Sadly some users seem intent upon making people hate the M1 by misrepresenting it or simply being obstreperous.
Again, nothing I said was wrong. The M1 is the fastest laptop chip you can buy, period. Prove otherwise.

And no, the Cinebench multi-core benchmark result isn't going to change this.

It seems like you have an innate hatred of Apple/M1 already because you don't use Apple products, which means any non-nerd buying the most basic MacBook now has a chip faster than your enthusiast laptop PC. That's fine. But it still doesn't change the fact that the M1 is the fastest laptop chip you can buy. Period.
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Again, nothing I said was wrong. The M1 is the fastest laptop chip you can buy, period. Prove otherwise.

They already did! Oh man you are a piece of work. But hey keep on downvoting if you don't like people.

btw Cinebench totally counts. So will all the other multithreaded benchmarks the 4900H and 4900HS can win. Doesn't mean you should want those chips over an M1, but your statements are just stupid.

Seems like you have an innate hatred of basic logic and decorum.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,710
3,554
136
True, and at this point it doesn't yet matter that much. I'm thinking long term, in terms of abandoning free speech and moving toward censorship, which is getting worse and worse with the increased "political correctness" movements. I can't even explicitly write here what I mean or else I get at least a temp ban. Big corps will increasingly censor the "uncommon opinion". It's already forbidden in apps themselves. Next, developers who voice an uncommon opinion will get their keys revoked (a couple years max), and then at some point they'll lock users who voice an unpopular opinion out of their devices. Basically more and more like China.
You'd be naive to assume that political correctness doesn't exist within the FOSS community.
 

mikegg

Golden Member
Jan 30, 2010
1,740
406
136
They already did! Oh man you are a piece of work. But hey keep on downvoting if you don't like people.

btw Cinebench totally counts. So will all the other multithreaded benchmarks the 4900H and 4900HS can win. Doesn't mean you should want those chips over an M1, but your statements are just stupid.

Seems like you have an innate hatred of basic logic and decorum.
So which laptop chip is faster than the M1 overall? I'm waiting.
 

IvanKaramazov

Member
Jun 29, 2020
56
102
66
It seems like you have an innate hatred of Apple/M1 already because you don't use Apple products, which means any non-nerd buying the most basic MacBook now has a chip faster than your enthusiast laptop PC. That's fine. But it still doesn't change the fact that the M1 is the fastest laptop chip you can buy. Period.

I'm really not sure what you're trying to achieve with this. The M1 is a fantastic chip, which is demonstrably faster than AMD's 4800u, for example, in some workloads, and demonstrably slower in several others. I'm not even sure how one would quantify an "overall" fastest chip, but certainly for a variety of real-life tasks the M1 is not the fastest. For others it is. And that's great!
 
  • Like
Reactions: MangoX and Tlh97

mikegg

Golden Member
Jan 30, 2010
1,740
406
136
I'm really not sure what you're trying to achieve with this. The M1 is a fantastic chip, which is demonstrably faster than AMD's 4800u, for example, in some workloads, and demonstrably slower in several others. I'm not even sure how one would quantify an "overall" fastest chip, but certainly for a variety of real-life tasks the M1 is not the fastest. For others it is. And that's great!
In most CPU benchmarks, the M1 is faster. And it provides real-life experiences that are faster, such as opening apps, instant-on capability, encoding/transcoding in macOS thanks to accelerators, editing 4K Canon R5 footage, and machine learning acceleration.

Comet Lake wins some benchmarks against Zen3, but everyone says Zen3 is the fastest desktop processor now. It's not controversial.

Saying the M1 is the fastest laptop chip isn't controversial. It clearly is for most real-world applications.
 
Last edited:

IvanKaramazov

Member
Jun 29, 2020
56
102
66
Looking into Cinebench a bit more...

So much of the discussion about the M1 is obfuscated by the fact that Intel and AMD have very different definitions of TDP, and Apple doesn't use it at all. It's hard to find power draw numbers for a lot of this stuff, so I'm curious if others here can test this directly.

We saw Andrei's numbers that Cinebench R23 caused the M1 CPU cores to pull 15w. M1 can apparently run hotter elsewhere, but the Cinebench numbers at least are based on 15w power draw.

I've been trying to find similar power draw figures for Renoir but it's challenging. Notebookcheck purportedly has Median Power Draw figures for each chip running Cinebench R15 and reports the following:

- 4700U - 6874 MC on R23 - 38w median power consumption on R15
- 4800U - 10156 MC on R23 - 49.5w median power consumption on R15

Those feel high to me. Is it possible that R15 simply allowed much higher boost clocks? Can anyone check power draw on R23? Are these numbers just wrong? If these power numbers hold for the most recent Cinebench version on AMD, that's rather important for comparing Renoir vs M1, I imagine.

EDIT - I think that must be package power, so the proper M1 comparison is closer to 20w?
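To put those figures side by side, here is the points-per-watt arithmetic using only the numbers quoted above (with the post's own caveat: the AMD power numbers are median draw during R15 and may be package- or system-level, so these are ballpark ratios, not a controlled comparison):

```python
# Points-per-watt from the figures quoted in the post above.
# Caveat: the AMD power numbers come from R15 runs and may be package- or
# system-level, so treat these as rough ratios only.
results = {
    "4700U": (6874, 38.0),    # (R23 multi-core score, median watts)
    "4800U": (10156, 49.5),
}
for name, (score, watts) in results.items():
    print(f"{name}: {score / watts:.0f} points/W")
# 4700U: 181 points/W, 4800U: 205 points/W
```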
 
Last edited:
  • Like
Reactions: Tlh97 and jeanlain

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
faster than the M1 overall? I'm waiting.

Oh I'm sorry, shifting the goal posts are we?

I'm really not sure what you're trying to achieve with this. The M1 is a fantastic chip, which is demonstrably faster than AMD's 4800u, for example, in some workloads, and demonstrably slower in several others. I'm not even sure how one would quantify an "overall" fastest chip, but certainly for a variety of real-life tasks the M1 is not the fastest. For others it is. And thats great!

Correct.
 
  • Like
Reactions: MangoX and Tlh97

mikegg

Golden Member
Jan 30, 2010
1,740
406
136
Oh I'm sorry, shifting the goal posts are we?
What was my original goal post and what's the new one? It hasn't changed.

The M1's CPU is the fastest overall among laptop CPUs.

The M1 as a package is the fastest among laptop chips.

Nothing has changed. Period.

I'm still waiting.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
I know as nerds it's our tendency to laser focus in on some element that can be measured and then fight over it.

But with M1 it really seems like the whole is greater than the sum of its parts.

IMO, it's clearly the best laptop chip.

On top of great CPU performance, it has great iGPU, and great integration of support systems. Great battery life, and it runs absurdly cool and quiet.

The fan in the M1 MBP is so quiet most people can't tell when it is running. Gruber used his for a week and never heard the fan:
https://daringfireball.net/2020/11/the_m1_macs
I’ve never once heard it in an entire week.

Never. Not once. Not a whisper.

I presume it has engaged at times when I’ve taxed the system. Again, the Cinebench multi-core CPU benchmark taxes every available CPU core for 10 minutes. But if the active cooling system has kicked in, I never heard it. I’ve never felt any air moving out of the vents, either. This is nothing at all like the fans on Intel-based MacBooks, which, if you’re not familiar, you can definitely hear, to say the least.

The end of this year has seen a lot of decent tech launches: NVIDIA Ampere, Zen 3, and AMD RDNA 2 being reviewed right now.

But hands down, the M1 Macs are the most impressive thing I have seen, and I am no Apple fan. I have never owned a single Apple product in my life. They just did an amazing job here.
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Looking into Cinebench a bit more...

So much of the discussion about the M1 is obfuscated by the fact that Intel and AMD have very different definitions of TDP, and Apple doesn't use it at all. It's hard to find power draw numbers for a lot of this stuff, so I'm curious if others here can test this directly.

We saw Andrei's numbers that Cinebench R23 caused the M1 CPU cores to pull 15w. M1 can apparently run hotter elsewhere, but the Cinebench numbers at least are based on 15w power draw.

I've been trying to find similar power draw figures for Renoir but it's challenging. Notebookcheck purportedly has Median Power Draw figures for each chip running Cinebench R15 and reports the following:

- 4700U - 6874 MC on R23 - 38w median power consumption on R15
- 4800U - 10156 MC on R23 - 49.5w median power consumption on R15

Those feel high to me. Is it possible that R15 simply allowed much higher boost clocks? Can anyone check power draw on R23? Are these numbers just wrong? If these power numbers hold for the most recent Cinebench version on AMD, that's rather important for comparing Renoir vs M1, I imagine.

EDIT - I think that must be package power, so the proper M1 comparison is closer to 20w?
How is Andrei defining TDP? Cuz AFAIK, Intel's definition of TDP has never been maximum total package power.
 
  • Like
Reactions: moinmoin

IvanKaramazov

Member
Jun 29, 2020
56
102
66
How is Andrei defining TDP? Cuz AFAIK, Intel's definition of TDP has never been maximum total package power.

Is it the power consumption of the CPU package or the whole system?

I don't think Andrei used the term TDP, he simply measured the CPU power draw for Cinebench as 15w, whereas he measures it elsewhere going as high as 24w iirc. I think the Notebookcheck scores are using an external monitor and thus monitoring whole system power draw for those Renoir Cinebench figures, but that still may suggest (I'd love for people to measure for themselves here to confirm) that the M1 is achieving its score at around half the power draw of the Renoir chips.

IMO, it's clearly the best laptop chip.

This I 100% agree with, unless your bread and butter is one of those particular workloads where a 4800u or even a Comet Lake H-series chip may have a slight edge. Even then, though, I expect the M-series chips that eventually make it into Apple's upper-level pro systems are going to be quite something.
 
  • Like
Reactions: Tlh97

IvanKaramazov

Member
Jun 29, 2020
56
102
66
Some more details here: https://www.ultrabookreview.com/41494-lenovo-ideapad-7-slim-review/

Looks like in performance mode the 4800u draws a bit over 30w (at 107 degrees C) for a few runs, and then levels out around 26-27w. So yes, the M1 is running at roughly half the power.

EDIT - should have said this is the Lenovo Slim 7. Obviously temperature and operating frequency are hugely determined by the manufacturer as well.
 
  • Like
Reactions: Tlh97

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
No, I didn't make up any terminology. You're just butt hurt.

When people say chip X is the fastest, they mean it's the fastest overall. No one means the fastest in everything.

People say Zen3 is the fastest desktop chip you can buy. But Zen3 does lose in some benchmarks to Comet Lake.

According to my "definition", I would still consider Zen3 to be faster than Comet Lake.

Same thing for M1. It loses in some benchmarks, sure but it's still the fastest laptop chip you can buy. Period.



Again, nothing I said was wrong. The M1 is the fastest laptop chip you can buy, period. Prove otherwise.

And no, the Cinebench multi-core benchmark result isn't going to change this.

It seems like you have an innate hatred of Apple/M1 already because you don't use Apple products, which means any non-nerd buying the most basic MacBook now has a chip faster than your enthusiast laptop PC. That's fine. But it still doesn't change the fact that the M1 is the fastest laptop chip you can buy. Period.
Wow dude, this comment was really something. Might as well come out and say: "I am the greatest man alive and therefore it's your job to prove it otherwise".

Why would it be anyone's job to prove the contrary of someone's subjective statement?
 

Staples

Diamond Member
Oct 28, 2001
4,952
119
106
How many years do Dell/HP/Google/Samsung support phones and laptops in terms of updates?
Nice try. I am talking about computers. And iPhones and iPads are not supported for 7 years like Macs; more like 5 years. My iPhones have all broken before then, so this has never been a problem for me. iPads, on the other hand: I've had to throw out 2 perfectly good ones.
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
What a beast, really impressive all around.
Also, Rosetta 2 is way better than the Microsoft equivalent for Windows on ARM.

The only negative for me is that you are locked to macOS. Other than that... a huge win for Apple.
 

videogames101

Diamond Member
Aug 24, 2005
6,777
19
81
Whether M1 specifically is the fastest laptop chip is a needless argument. The pace of innovation is what matters. M2 will in all likelihood continue Apple's incredible rate of performance improvements, and there is next to no chance Intel pulls even with that pace. I believe we are near or at the inflection point for x86; between M1 and Graviton 2, get ready for ARM everywhere. That said, the lack of common desktop components is still a huge issue, as folks rightly point out, but if NVIDIA has their way and buys ARM I suspect they will take care of that in short order.
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Teardown of Mac mini:


[Photo: M1 Mac mini teardown]

Not the best paste application for the memory.

Nice try. I am talking about computers. And iPhones and iPads are not supported for 7 years like Macs. 5 years more like it. Mine have all broken before then so this has never been a problem for me. iPads on the other hand, I’ve had to throw out 2 perfectly good ones.
My two iPad Air 2s will continue to get full support until fall 2021, so that would be 7 years. They are both still working fine, in use every day.

My iPhone 6s and iPhone SE will also likely lose support in fall 2021, so that would be 6 years for those.
 
  • Like
Reactions: Mopetar

Doug S

Platinum Member
Feb 8, 2020
2,202
3,405
136
For those complaining about Chrome on M1, the native version comes out tomorrow.

Apparently it was supposed to come out today actually, but sh!t happens...

It makes total sense that Chrome would perform poorly in emulation, because Rosetta 2 works via static (ahead-of-time) translation, which doesn't work for applications that generate code at run time, as a browser does to run JavaScript fast.

I don't know how Rosetta 2 handles that sort of thing, but it may throw up its hands and say "I can't translate this because there are jumps to code segments that don't exist until run time" and fall back to JIT translation to run Chrome.

Maybe someone could try running a web benchmark on Chrome native on an x86 Mac and then emulated on an M1 Mac? Wouldn't be surprised if it was a 3-5x difference. Doesn't matter in the long run, there will be a native version soon if there isn't already. Any developers of applications that use code generation or self modifying code will have a greater incentive than most to port.
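To illustrate the static-translation problem described above: a JIT, like a browser's JavaScript engine, creates executable code at run time, so an ahead-of-time translator scanning the binary on disk never sees it. A toy sketch in Python, using exec as a stand-in for a JIT emitting machine code into a buffer (function names here are made up for illustration):

```python
# Toy illustration: the function below does not exist anywhere in the program
# text on disk; it is built at run time, the way a JavaScript JIT emits
# machine code into a buffer. An ahead-of-time translator scanning the
# on-disk binary has nothing to translate.

def jit_compile(expression):
    """Build and return a function from source text generated at run time."""
    namespace = {}
    source = f"def hot_loop(x):\n    return {expression}\n"
    exec(source, namespace)      # the code comes into existence only here
    return namespace["hot_loop"]

square = jit_compile("x * x")    # never present in the on-disk "binary"
print(square(7))                 # prints 49
```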
 
  • Like
Reactions: Mopetar

Doug S

Platinum Member
Feb 8, 2020
2,202
3,405
136
Right from when ARM Macs were announced, iOS apps on Macs never made sense to me; it seems like the same kind of garbage experience Apple is famous for avoiding.

I wonder who, high up in Apple, thought this was an idea worth pushing.

They make perfect sense for developers who want to test their iOS apps without the slow translation layer they had to use on x86 Macs. A quick Google search claimed there were 1.3 million iOS developers worldwide as of this June, and since macOS is required for developing iOS apps, other than a few maybe using a Hackintosh or running macOS in a VM, they will all be using a Mac. Even if there were ZERO other benefits, that alone justifies making it possible.

Beyond that though, if there is an app that exists on iOS but not the Mac that does what you need, even if it isn't (today) 100% compatible with all features like going full screen on a laptop/desktop, that's better than not having access to that app.
 
  • Like
Reactions: IvanKaramazov

Doug S

Platinum Member
Feb 8, 2020
2,202
3,405
136
100% of people don't use GB5 (to do any meaningful work). At least Cinema4D is good for something.

GB5 doesn't do any meaningful work, and neither does Cinebench. That Cinema4D is good for something is not an argument against GB5, because some of its subtests do meaningful work like the compiler benchmark. More people run compilers than run Cinema4D, so that subtest alone is more valuable than Cinebench.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
So I'm going to say kudos to Apple for the most important thing so far.

Having these in stock.

The only M1 Mac you have to wait long for, per their site, is the MacBook Pro 13, at ~2 weeks. The other SKUs can be had in 3 or 4 days.

I'll not name names, but there are two companies who have effectively been given a pass for paper launches recently. Apple isn't one of them.