Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from the GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).
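A quick back-of-the-envelope check of the GPU figures above (the 8 FP32 ALUs per execution unit and the ~1.28 GHz clock used below are commonly cited estimates, not official Apple numbers):

Code:
# Rough sanity check of the M1 GPU throughput figures listed above.
# Assumptions (not from Apple): 8 FP32 ALUs per execution unit, ~1.278 GHz GPU clock.
eus = 128
alus = eus * 8                          # 1024 FP32 ALUs
clock_ghz = 1.278
tflops = alus * 2 * clock_ghz / 1000    # 2 FLOPs per ALU per cycle (FMA)
print(f"{tflops:.2f} TFLOPS")           # ~2.62, matching the quoted 2.6 teraflops
print(f"{82 / clock_ghz:.0f} TMUs")     # ~64 texture units implied by 82 gigatexels/s
print(f"{41 / clock_ghz:.0f} ROPs")     # ~32 render output units implied by 41 gigapixels/s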

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, H.265 (HEVC), and ProRes
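The "up to 100 GB/s" figure above also checks out against the memory specs (the 128-bit bus width and LPDDR5-6400 speed below are commonly reported values, not stated in this post):

Code:
# Sanity check of the M2's quoted ~100 GB/s memory bandwidth.
# Assumptions (widely reported, not from this post): LPDDR5-6400 on a 128-bit bus.
transfers_per_sec = 6400e6        # 6400 MT/s
bus_width_bytes = 128 // 8        # 16 bytes per transfer
bandwidth_gbs = transfers_per_sec * bus_width_bytes / 1e9
print(bandwidth_gbs)              # 102.4 GB/s, i.e. the ~100 GB/s Apple quotes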

M3 Family discussion here:

 

nxre

Member
Nov 19, 2020
It's crazy how many people are still in the denial phase. No x86 uarch will match Apple Silicon for PPW for years. Absolute performance is debatable, but PPW is not. I've seen the most ridiculous excuses these past few days attributing their lead to 5nm (ignoring that even if AMD wanted to, their design would never allow them to be early adopters of a bleeding-edge node, because of the absurd clock-speed requirements that demand mature nodes). TSMC 5nm literally just reduced power by 10% for the same performance (https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive/3). Or pretending Apple is playing some secret trick in their silicon that other companies could play but didn't, and that if they did they would destroy Apple(?), especially when talking about their cache hierarchy. Apple's cache hierarchy is fundamentally different from ANY other in the industry; it's not about size, but execution.
Then there is the absurd accelerator excuse. Ignoring that Apple's CPU alone is matching the best from either AMD or Intel, the idea that accelerators are somehow 'bad' and 'unfair' completely ignores the paradigm shift computing has been undergoing for decades. Accelerators ARE the future, and there is great performance to be gained from them rather than counting on a general design such as a CPU to do everything. It's why servers are increasingly moving to GPUs.
Sure, you can debate whether Apple's design will be able to scale and match the multicore beasts from Intel and AMD on the desktop; that is valid, given we have no concrete data proving that Apple will succeed there.
But debating Apple's PPW lead and uarch achievements is pure denial at this point. The benchmarks are there, their uarch has been developed on mobile for 10 years, and we know how performant and efficient it is. Just accept it and move on. And talking about whether future chips from Intel or AMD might match it is not as smart as it seems: it would mean Apple is a year ahead of the best x86 designs.
 

DrMrLordX

Lifer
Apr 27, 2000
No x86 uarch will match Apple Silicon for PPW for years. Absolute performance is debatable, but PPW is not.

Ah, now we're getting down to the brass tacks. Good. And you're right - at least as far as existing x86 implementations go. Intel doesn't look like they've got anything marketable in the pipe that will change that picture, either. AMD's R&D priorities are different enough that maybe they don't care very much. Yet. We'll see how that goes.
 

nxre

Member
Nov 19, 2020
Ah, now we're getting down to the brass tacks. Good. And you're right - at least as far as existing x86 implementations go. Intel doesn't look like they've got anything marketable in the pipe that will change that picture, either. AMD's R&D priorities are different enough that maybe they don't care very much. Yet. We'll see how that goes.
We also have to keep in mind that CPU designs are planned years in advance. The M1's impact (or lack of impact) on x86 designs will only be felt in 2022/2023 designs at the earliest. It's lunacy to believe either AMD or Intel will change their 2021 designs to compete with Apple on PPW when Apple isn't really threatening their market share yet.
 

DrMrLordX

Lifer
Apr 27, 2000
We also have to keep in mind that CPU designs are planned years in advance. The M1's impact (or lack of impact) on x86 designs will only be felt in 2022/2023 designs at the earliest. It's lunacy to believe either AMD or Intel will change their 2021 designs to compete with Apple on PPW when Apple isn't really threatening their market share yet.

I think it would be naive to assume that Intel, AMD, and others have been ignoring the A-series SoCs since about A8.
 

Heartbreaker

Diamond Member
Apr 3, 2006
I think it would be naive to assume that Intel, AMD, and others have been ignoring the A-series SoCs since about A8.

They may keep tabs on it, but it is not remotely as important as the direct AMD vs Intel battle: since Apple doesn't sell CPUs, they aren't competing for any of their contracts, except for the Apple Mac contract, which Intel clearly lost...
 
Apr 30, 2020
Then there is the absurd accelerator excuse. Ignoring that Apple's CPU alone is matching the best from either AMD or Intel, the idea that accelerators are somehow 'bad' and 'unfair' completely ignores the paradigm shift computing has been undergoing for decades. Accelerators ARE the future, and there is great performance to be gained from them rather than counting on a general design such as a CPU to do everything. It's why servers are increasingly moving to GPUs.
I'm not 100% convinced that accelerators are the future. Maybe in the short term, but I doubt it long term. The industry goes through cycles, shifting its focus between "general purpose" computing elements and "bespoke" compute elements.

The first 3D graphics adapters had bespoke, individual, discrete components (or "accelerators" if you will). They literally had a chip to do texture mapping, another chip for rasterization, another chip for geometry, and so on. They integrated all these chips together eventually, but each GPU die still had fixed-function hardware. You had components that only did geometry, components that only did pixel shading, components that only did X. We even had dedicated physics processing cards (PhysX wasn't just an Nvidia proprietary software stack - it was a dedicated physics accelerator). But the problem with this approach is that you can have a ton of unused silicon at any given time. If I'm not doing any encoding, raytracing, or tessellation, or have a scene with very little geometry - it's ridiculous to have all that silicon sitting there idle.

That's why modern GPUs switched to "general purpose" architectures in the first place around 2006. No more dedicated "pixel shaders," no more dedicated "geometry engines," and so on. You had multipurpose elements in your silicon that could do anything that was needed. It was a much better use of the hardware.

Now CPUs and GPUs are heading in the opposite direction, with a dedicated hardware video encoder/decoder, dedicated AI accelerators, dedicated RT accelerators, dedicated encryption hardware, dedicated this, dedicated that. Eventually manufacturers are going to get sick of burning 70% of their die area on discrete components that only get used some of the time, by some users, and will figure out how to get things performant using a general-purpose architecture.
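To put a rough number on that idle-silicon argument, here is a toy utilization model; every area fraction and duty cycle below is invented purely for illustration, not measured from any real chip:

Code:
# Toy model: fraction of die area doing useful work at a random moment
# when much of the die is fixed-function accelerators. All numbers are made up.
blocks = {
    # name: (fraction of die area, fraction of time in use)
    "general-purpose CPU cores": (0.30, 0.90),
    "GPU shader array":          (0.30, 0.40),
    "video encode/decode":       (0.10, 0.10),
    "neural engine":             (0.15, 0.10),
    "other fixed-function":      (0.15, 0.10),
}
avg_active_area = sum(area * duty for area, duty in blocks.values())
print(f"{avg_active_area:.0%} of die area active on average")  # 43% with these made-up numbers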
 

cortexa99

Senior member
Jul 2, 2018
I suspect the difference between CISC & RISC causes this huge PPW gap, which means it is not reversible; almost no x86 chip could compete.

What Apple is doing is correct and revolutionary. When the market widely accepts this brand-new Mac, other manufacturers will follow, and x86's death will be on the schedule...
 

Heartbreaker

Diamond Member
Apr 3, 2006
Fun fact: many people are more impressed by the M1 GPU than by its CPU.
The chief reason is that there is a clear distinction between discrete and integrated GPUs, and that PC iGPUs have historically been very weak. But if PCs had console-like APUs, the M1 GPU would be very far behind them in performance.
In perf/W, the M1 Firestorm core should be about 4X better (single-thread) than TGL for instance (about >5W/core for the M1, and 21W for the 1185G7).
The GPU is, according to Apple, only 3X better than the competition in perf/W.
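For what it's worth, the "about 4X" figure in that quote falls straight out of the per-core power numbers if single-thread performance is taken as roughly equal (that equal-performance assumption is mine, for illustration only):

Code:
# Per-core power as quoted above: ~5 W for an M1 Firestorm core, ~21 W for a Core i7-1185G7 core.
# Assuming roughly comparable single-thread performance, perf/W scales with the inverse of power.
m1_core_w = 5.0
tgl_core_w = 21.0
print(f"{tgl_core_w / m1_core_w:.1f}x")   # 4.2x, consistent with the 'about 4X' claim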


I'm more impressed by the "whole" than I am by any of the individual parts, and especially impressed by the final product results.

Read/watch the "hands on" coverage of the actual devices. People are blown away by how absurdly powerful, cool, and quiet these devices are.

For years we've kept seeing higher Cinebench scores from AMD/Intel, but that really hasn't translated into a big shift in user experience. This DOES.

I watched a video of someone who had an 8GB/256GB Mini to test for a week. He said it was by far the best video editing machine he had ever used. Better than the 8-core/32GB machine on his desk.

People nitpick about only 16GB of RAM or some other spec minutiae, but the truth is these machines put all the pieces together such that they seemingly outperform their paper specs in actual usage.

I have absorbed many reviews and there is almost awe at how cool and quiet these machines are. Obviously the MBA never makes noise, but it also stays cool, unlike most fanless laptops, and it takes almost 10 minutes of constant full-core use to throttle.

But it's the machines with active cooling that people seem most surprised by, as most say they can't actually be heard, even under heavy benchmarking.

This is a big leap in the overall product.
 

Roland00Address

Platinum Member
Dec 17, 2008
Notebookcheck?

What computers?
I hope to see M1 benchmarks of the Air, Pro, and Mini running through the ancient Cinebench 11.5 so I can compare them to old computers that already had 11.5 run on them by websites like AnandTech and NotebookCheck.

I want to see how much growth we've had since the Core 2 Quad, Sandy Bridge, Atom (Silvermont, aka Bay Trail), etc.

Mainly for “ego” reasons 😄
 

shady28

Platinum Member
Apr 11, 2004
You have to wonder, if hardware acceleration is as potent as you claim (and it definitely is, performance-wise), why anybody with sense would use the CPU for encoding. The reason is that using the CPU produces superior quality and smaller file sizes. Now you can, as you say, achieve similar quality with hardware encoders by using higher bitrates, but the resulting output files will be much larger than what a CPU-based encoder can produce.

So if you like quality and efficiency, CPU encoding is king. If you like speed and convenience, hardware-accelerated encoding is king.

If you check the professional sites, they show the quality issue disappears at bitrates over ~6 Mbit/s. That is actually quite a low bitrate. Most tests are done at something like 20 or 40 Mbit/s, where it makes no difference.
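To make the file-size side of that tradeoff concrete, size scales linearly with bitrate; the clip length below is just an example value, and the bitrates are the ones mentioned above:

Code:
# File size vs. bitrate for an example 10-minute clip (illustrative numbers only).
minutes = 10
for mbit_s in (6, 20, 40):
    size_mb = mbit_s * 1e6 * minutes * 60 / 8 / 1e6   # bits/s * seconds -> bytes -> MB
    print(f"{mbit_s} Mbit/s -> {size_mb:.0f} MB")
# 6 Mbit/s -> 450 MB, 20 Mbit/s -> 1500 MB, 40 Mbit/s -> 3000 MB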
 

mikegg

Golden Member
Jan 30, 2010
It's crazy how many people are still in the denial phase. No x86 uarch will match Apple Silicon for PPW for years. Absolute performance is debatable, but PPW is not. I've seen the most ridiculous excuses these past few days attributing their lead to 5nm (ignoring that even if AMD wanted to, their design would never allow them to be early adopters of a bleeding-edge node, because of the absurd clock-speed requirements that demand mature nodes).
You have to understand where you are: ultra-niche enthusiasts, PC master race forum.

There are a ton of people here who will point to one meaningless benchmark where the M1 loses and write "see, I told you it isn't the best chip for everything".

These are the people who won't buy into the Apple ecosystem and are butthurt because they can no longer claim that they have the fastest hardware. Any college freshman with a MacBook Air now has better performance in a huge number of popular applications than their $800 AMD 5950X.

It was a similar transition in the phone world. Back then, a lot of people claimed that if you bought Apple, you were getting less performance for the money. That hasn't been true for a long time, because a $400 iPhone SE is faster than any Android phone, period. If you buy an Android phone over an Apple phone, you're getting less performance for your money.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
So how is gaming performance on these? A CPU is no good without a good GPU...
 

DrMrLordX

Lifer
Apr 27, 2000
I suspect the difference between CISC & RISC

Oh good, was wondering when someone would bring that up.

They may keep tabs on it, but it is not remotely as important as the direct AMD vs Intel battle: since Apple doesn't sell CPUs, they aren't competing for any of their contracts, except for the Apple Mac contract, which Intel clearly lost...

Believe that both Intel's and AMD's engineering talent is looking very, very closely at all of Apple's SoCs. And have been for years.

one meaningless benchmark

Meaningless because the M1 didn't win it outright?

These are the people who won't buy into the Apple ecosystem

It's almost as if people have good reason not to do so.

and are butthurt because they can no longer claim that they have the fastest hardware.

Now you're just being silly.
 

Eug

Lifer
Mar 11, 2000
I'm more impressed by the "whole" than I am by any of the individual parts, and especially impressed by the final product results.
The part that surprised me a bit is the full DRAM on package. It makes sense, but I didn't think they'd do it. I'm still wondering what they'll do for the higher end Macs. I can see up to 64 GB being feasible, but will they actually do that in-package? What about 128 GB?

Read/watch the "hands on" coverage of the actual devices. People are blown away by how absurdly powerful, cool, and quiet these devices are.

For years we've kept seeing higher Cinebench scores from AMD/Intel, but that really hasn't translated into a big shift in user experience. This DOES.

I watched a video of someone who had an 8GB/256GB Mini to test for a week. He said it was by far the best video editing machine he had ever used. Better than the 8-core/32GB machine on his desk.
Yes, I'm of the firm belief that Apple purpose-built this chip (and the A-series chips beforehand, in preparation) to be smooth for video editors. No, I don't think Apple is only targeting them, but this has been a very big issue for a long time, and the previous solutions have mainly been to add more cores to brute-force it, with some GPU acceleration thrown in. That hasn't really fared all that well (unless you had an unlimited budget), since the codecs have also advanced at the same time. Who in 2010 would have thought that by 2015 we'd be recording 4Kp30 video on our phones?

Well, not only did we get to record 4Kp30 on those phones (or maybe even higher on some Android phones), Apple also activated hardware HEVC encoding of those 4Kp30 videos a couple of years later, so the hardware was already in place. They would have been planning for this since before 2012. I note that the 64-bit A7 came out in 2013.

Now in 2020, I can record 4Kp60 10-bit HDR Dolby Vision on my phone if I want. And remember, this is hardware encode, not just decode acceleration.

BTW, Apple bought PA Semi in 2008. Apple certainly didn't waste any time putting their new chip architects to work.
I have absorbed many reviews and there is almost awe at how cool and quiet these machines are. Obviously the MBA never makes noise, but it also stays cool, unlike most fanless laptops, and it takes almost 10 minutes of constant full-core use to throttle.

But it's the machines with active cooling that people seem most surprised by, as most say they can't actually be heard, even under heavy benchmarking.

This is a big leap in the overall product.
Yes, this and the battery life. My last Mac laptop was a 12" MacBook, both because it was small and because it is fanless. My machine of choice if I were buying today would be the 13.3" MacBook Air, again because it's fanless, but given that it now takes real extended workloads to ramp up the fan on the Pro, I'd consider that too, because it's almost always quiet and it's got longer battery life. Well, I would consider it if it didn't have that stupid Touch Bar. :rolleyes: At least the good news for the past little while is that the ESC key is a real physical key again. When Apple removed the ESC key, even the most diehard Apple zealots revolted. So they finally brought it back, thank the gods.

I still wish for a true 12" MacBook successor though. Mine is only 2 lbs. The MacBook Air weighs 40% more.

Maybe in 2021 when Apple revamps their laptop form factors, they can go with say a 12.6" MacBook (Air) at 2.3 lbs, a 14" MacBook Pro, and a 16" MacBook Pro. That would be perfect.
 

shady28

Platinum Member
Apr 11, 2004
You have to understand where you are: ultra-niche enthusiasts, PC master race forum.

There are a ton of people here who will point to one meaningless benchmark where the M1 loses and write "see, I told you it isn't the best chip for everything".

These are the people who won't buy into the Apple ecosystem and are butthurt because they can no longer claim that they have the fastest hardware. Any college freshman with a MacBook Air now has better performance in a huge number of popular applications than their $800 AMD 5950X.

It was a similar transition in the phone world. Back then, a lot of people claimed that if you bought Apple, you were getting less performance for the money. That hasn't been true for a long time, because a $400 iPhone SE is faster than any Android phone, period. If you buy an Android phone over an Apple phone, you're getting less performance for your money.

It's like this in many other places too, though; people call it confirmation bias, but it really has to do with investment in time and technology. It's worth noting that for many people in the world, a $500 phone is akin to someone stateside buying a brand-new car. So that phone represents a significant investment for them.
 

insertcarehere

Senior member
Jan 17, 2013
I'm more impressed by the "whole" than I am by any of the individual parts, and especially impressed by the final product results.

Read/watch the "hands on" coverage of the actual devices. People are blown away by how absurdly powerful, cool, and quiet these devices are.

For years we've kept seeing higher Cinebench scores from AMD/Intel, but that really hasn't translated into a big shift in user experience. This DOES.

I watched a video of someone who had an 8GB/256GB Mini to test for a week. He said it was by far the best video editing machine he had ever used. Better than the 8-core/32GB machine on his desk.

People nitpick about only 16GB of RAM or some other spec minutiae, but the truth is these machines put all the pieces together such that they seemingly outperform their paper specs in actual usage.

I have absorbed many reviews and there is almost awe at how cool and quiet these machines are. Obviously the MBA never makes noise, but it also stays cool, unlike most fanless laptops, and it takes almost 10 minutes of constant full-core use to throttle.

But it's the machines with active cooling that people seem most surprised by, as most say they can't actually be heard, even under heavy benchmarking.

This is a big leap in the overall product.

Indeed, the Intel MacBooks that the M1 replaces have been pretty infamous for overheating and thermal throttling for years now; switching from those to the M1 would be a large experience upgrade even ignoring the performance uplift.
 

beginner99

Diamond Member
Jun 2, 2009
For most users who just want a laptop that works and runs some popular software that you can get on MacOS or Windows (or just MacOS alone), it looks like a great deal.

For me that's the weird part, as these people don't actually need the performance or benefit from it, nor do they know what ARM or x86 is. For the people who do know and do care about performance, the risk of incompatibility might actually push them away from the Mac. Even if we could have Macs at work (we can't), I wouldn't get an ARM-based one. Just too unpredictable what might not work, and too much risk of wasted time. And at home I have no interest in tinkering around to get stuff working. I just need it to work out of the box (and that doesn't mean just office and internet usage, but also programming tools etc.). I know it works on Windows or Linux. So it's not worth the risk, especially given the pricing.

Apple could probably have made this slower in all aspects and on 7nm and sold it for $599, and it would have been a much, much bigger win, financially and user-base-wise. Less so tech-wise.
 

Qwertilot

Golden Member
Nov 28, 2013
That's partially because they wanted to share with their phones, but more because if you're going to move people over to a new architecture, etc., you need to give them very strong motivation.

So for the first time out, you set out to destroy the previous lineup, not just edge past it. They might well keep the subsequent update cycle a bit slower than the iPhone's.
 

amrnuke

Golden Member
Apr 24, 2019
For me that's the weird part, as these people don't actually need the performance or benefit from it, nor do they know what ARM or x86 is. For the people who do know and do care about performance, the risk of incompatibility might actually push them away from the Mac. Even if we could have Macs at work (we can't), I wouldn't get an ARM-based one. Just too unpredictable what might not work, and too much risk of wasted time. And at home I have no interest in tinkering around to get stuff working. I just need it to work out of the box (and that doesn't mean just office and internet usage, but also programming tools etc.). I know it works on Windows or Linux. So it's not worth the risk, especially given the pricing.

Apple could probably have made this slower in all aspects and on 7nm and sold it for $599, and it would have been a much, much bigger win, financially and user-base-wise. Less so tech-wise.
Same, and any device that has an M1 in it is not a "great deal" at $699+ for those use cases mentioned (a laptop that works and runs some popular software that you can get on MacOS or Windows).

IMO, a great deal for that use case is a laptop with a 4500U for under $700.
 

Eug

Lifer
Mar 11, 2000
Same, and any device that has an M1 in it is not a "great deal" at $699+ for those use cases mentioned (a laptop that works and runs some popular software that you can get on MacOS or Windows).

IMO, a great deal for that use case is a laptop with a 4500U for under $700.
Just the battery life alone is a big incentive. Lack of fan noise is another one.
 

Eug

Lifer
Mar 11, 2000
How much do you think Apple saves per machine with the new chip?

This guy (who is a VP at IBM and who used to be a GM at NVIDIA) breaks it down with his back-of-the-napkin calculations:


He (and others) are claiming savings of $150-$200 or so per machine. Somehow that seems too high to me, but I guess he'd know better than I would.

Still, even if the savings per unit were much, much less, that'd still translate into billions of dollars per year. So not only does Apple get much better performance (per watt) with the Mx chips, they also save billions in the process every year.

And that's not even including the Ax chips.
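Even with much more conservative per-unit savings than the video claims, the annual total still lands in the billions; the Mac unit volume below is a rough ballpark of recent annual shipments, not an exact figure:

Code:
# Rough ballpark: annual savings = per-unit BOM saving x Macs shipped per year.
macs_per_year = 22e6                      # approximate recent annual Mac shipments (assumption)
for saving_per_unit in (50, 150, 200):    # $ per machine
    total_b = saving_per_unit * macs_per_year / 1e9
    print(f"${saving_per_unit}/unit -> ${total_b:.1f}B/year")
# $50/unit -> $1.1B/year, $150/unit -> $3.3B/year, $200/unit -> $4.4B/year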
 

beginner99

Diamond Member
Jun 2, 2009
Just the battery life alone is a big incentive. Lack of fan noise is another one.

OK, here it probably matters how much you travel, especially on airplanes during the day (I usually book long flights overnight and just sleep). I mean, how often do you need 15 hours of battery life? I can't think of ever having needed it. 5 hours is too risky, 8 hours is much better, 10 hours to be safe, but beyond that you hit diminishing returns very quickly. There are so many places you can charge now, and on the rare occasion you can't and need those 15 hours, you can bring along one or two power banks, or even solar-chargeable ones, for example when camping. At some point going thinner and lighter is probably better.

Noise I can partially agree with, but my old 15" 6-core work laptop with a GPU is dead quiet when web browsing or writing emails, and it doesn't really bother me, since an open-plan office is much louder anyway, or you wear headphones when traveling, etc. For the average user's use case, even this box would run essentially fanless.

EDIT: OK, I just saw your edit now. That article completely omits R&D costs and uses what I think are too-high prices for the Intel CPUs. The M1 might cost $50 if you only count the wafer and ignore the actual SoC design (electrical engineering, which Intel did) and the upfront process costs like masks etc. (which Intel had). So the comparison is idiotic.

Going by this infamous chart:

[Chart: chip design cost by process node]


Apple now has to spend that whole stack themselves, which was previously carried by Intel. If we assume a 7nm design cost of $300M and divide it by the ~22 million Macs sold per year, that means, what, about $14 per device? Hmm, something doesn't add up. Of course a large part of the R&D is shared with the iPhones as well, so that spreads it even further, but these costs are sky-high and the article you mention simply ignores them 100%.
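Spelling out that amortization with the same numbers (the $300M design cost is the assumed 7nm figure from the chart, ~22M Macs/year is the figure used above, and the iPhone volume is a rough ballpark):

Code:
# Amortizing a fixed design cost over unit volume.
design_cost = 300e6                     # assumed 7nm-class design cost (from the chart above)
macs_per_year = 22e6
print(design_cost / macs_per_year)      # ~$13.6 per Mac if Macs carried the cost alone
iphones_per_year = 200e6                # rough ballpark of annual iPhone shipments (assumption)
print(design_cost / (macs_per_year + iphones_per_year))   # ~$1.35 per device when shared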

EDIT2: that should actually have been added to my other post
 

Panino Manino

Senior member
Jan 28, 2017
I'm trying, but something in my mind is preventing me from getting more impressed with this M1.
I mean, is it a surprise that 6 ALUs perform better than 4 ALUs, for example? Yes, Apple deserves praise for delivering such a wide architecture, and they didn't start yesterday; they've been doing this for years now. But even so, even with such a wide architecture the x86 competition still gets so close, so is it really that impressive? Both AMD and Intel seem to be transitioning to somewhat wider architectures, so can't we expect a similar boost in performance? The real M1 advantage is all the other specialized hardware built in, but again, this seems to be becoming an industry standard; AMD even joined up with Xilinx.

This is good for Apple and will increase their market share a bit, but there's no reason to fuss so much about it. At least, I can't, even if I try (for some reason :confused_old:).
 

beginner99

Diamond Member
Jun 2, 2009
So, how much do you think Apple saves per machine with the new chip?

If you factor in the R&D of the SoC itself, and even more so the production costs (masks etc.; TSMC isn't cheap, especially not when buying up all the bleeding-edge capacity), I'm betting they actually pay significantly more than before.

I figure, with all the manufacturers whining about high process costs since 16nm, I wonder where the limit is for the number of chips you need to sell to still make a profit. I don't think there can be more than 2 tiers. I can't see them selling that many Mac Pros to earn the costs back (well, unless they sell them starting from $10k or something absurd like that). Or the actual costs have been way overblown.
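That break-even question reduces to a one-line formula; every dollar figure below is a placeholder, not a real Apple or TSMC number:

Code:
# Minimum units needed to recoup fixed (NRE) costs: design, masks, validation.
nre = 500e6              # placeholder total fixed cost for one chip tier
margin_per_chip = 100    # placeholder margin recovered per chip sold ($)
break_even_units = nre / margin_per_chip
print(f"{break_even_units:,.0f} units")   # 5,000,000 units with these placeholder numbers
# A tier that only goes into the Mac Pro, selling perhaps a few hundred thousand units a year,
# clearly couldn't carry that kind of fixed cost on its own.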