Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s
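The three throughput figures above are consistent with the commonly reported (but not Apple-official) M1 GPU configuration — the clock, ALU, TMU, and ROP counts below are assumptions, not part of Apple's spec sheet:

```python
# Back-of-the-envelope check of the M1 GPU throughput figures.
# Clock and unit counts are assumed from commonly reported M1 data.
CLOCK_HZ = 1.278e9    # assumed GPU clock, ~1.278 GHz
FP32_ALUS = 1024      # 128 execution units x 8 FP32 ALUs each (assumed)
TMUS, ROPS = 64, 32   # assumed texture and render unit counts

tflops = FP32_ALUS * 2 * CLOCK_HZ / 1e12   # x2: fused multiply-add = 2 flops
gtexels = TMUS * CLOCK_HZ / 1e9
gpixels = ROPS * CLOCK_HZ / 1e9

print(f"{tflops:.1f} TFLOPS, {gtexels:.0f} Gtexels/s, {gpixels:.0f} Gpixels/s")
# -> 2.6 TFLOPS, 82 Gtexels/s, 41 Gpixels/s
```

All three derived numbers land on Apple's quoted 2.6 TFLOPS / 82 Gtexels/s / 41 Gpixels/s.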

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as they do with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock speed differences).

EDIT:

[Screenshot: Screen-Shot-2021-10-18-at-1.20.47-PM.jpg]

M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K h.264, h.265 (HEVC), and ProRes
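The "up to 100 GB/s" figure falls straight out of the memory config. The exact bus width isn't published by Apple, so the 128-bit figure below is an assumption based on LPDDR5-6400 in a typical dual-channel-per-package layout:

```python
# Where the M2's ~100 GB/s bandwidth figure comes from.
# Assumed config: LPDDR5-6400 on a 128-bit unified memory bus.
transfer_rate = 6400e6   # transfers per second per pin (LPDDR5-6400)
bus_width_bits = 128     # assumed total bus width

bandwidth_gbs = transfer_rate * bus_width_bits / 8 / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")   # 102.4 GB/s, marketed as ~100 GB/s
```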

M3 Family discussion here:


M4 Family discussion here:

 

johnsonwax

Senior member
Jun 27, 2024
Making their own wireless chip is exactly in character for Apple. Much of their history has been spent trying to vertically integrate their entire product. Eventually they'll design all of the software and hardware in them and will move to producing their devices only in their own special Apple factories. After that, those factories will only accept Apple-mined aluminum. Many years later, it will truly be an Apple World. Think Borg Cube, but a much nicer, more modern design.
But they don't vertically integrate everything. Assembly, most notably. I don't think Apple intends to run their own foundry either. And as you note, they don't mine their own aluminum.

Apple vertically integrates where they can own the IP (or where they are shut out of the IP, as with cellular radios). Assembly is commodified, as is foundry to a large degree, as is mining, as is a lot of their software foundation being open source, as are their tools, and as I would think radios like BT/WiFi are. It's not like Broadcom has an unbreachable patent portfolio, and it's not like there's a lot of opportunity to improve on power usage. Which is why I speculated that Apple might be interested in building out IP on that layer, as they were sort of doing around UWB (even though that was built on an existing spec, there wasn't a lot of silicon in the wild).

And Apple's silicon team, like most of their teams, is pretty small. They don't try to in-house everything. Note their use of non-Apple hardware for their server operations until AI gave them a need that couldn't really be met by the market, given their security requirements. I don't think they'd in-house local radios unless there was some kind of IP advantage to doing so, which we don't yet see.
 

jpiniero

Lifer
Oct 1, 2010
And Apple's silicon team, like most of their teams, is pretty small. They don't try to in-house everything. Note their use of non-Apple hardware for their server operations until AI gave them a need that couldn't really be met by the market, given their security requirements.

Apple doesn't sell servers anymore, though. The AI servers have more to do with Wall Street than an actual need.
 

johnsonwax

Senior member
Jun 27, 2024
Apple doesn't sell servers anymore, though. The AI servers have more to do with Wall Street than an actual need.
Apple is one of the larger server buyers on the planet. They have over 5 million square feet of data centers. They are not a small player.

And what makes you think that the AI servers don't represent an actual need? Apple laid out pretty clearly how it will work. I don't think any of us know exactly how much demand there will be there for them, but you can't possibly say there is no need.
 
Reactions: mikegg

jpiniero

Lifer
Oct 1, 2010
Apple is one of the larger server buyers on the planet. They have over 5 million square feet of data centers. They are not a small player.

Yes, but they are not using Apple products, and I assume they aren't using OS X either. I believe they use HP.
 

mikegg

Golden Member
Jan 30, 2010
8% better than A18 Pro. I wonder how much performance got nerfed because of the MIE implementation.

It's honestly not bad considering it stayed on the N3 family, got MIE, and added Neural Accelerators to the GPU, which probably took up most of the transistor budget. Matmul acceleration in GPUs usually takes up an additional ~10% of die space.

I think the M6 generation is going to be the mother of all upgrades. I've been holding out on upgrading. My M1 Pro 16" MBP is still going strong, along with my M4 Mini. My next laptop is definitely going to be an M6 Max w/ maximum memory (192/256GB?) for local LLMs. Hopefully top-end LPDDR6 for 917 GB/s of bandwidth. N2 node. Tandem OLED. Supposedly thinner and lighter than the current generation of MBPs. Now that'd be an excellent upgrade for M1 holdouts like me.
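For the "maximum memory for local LLMs" point, the usual rule of thumb is that model weights need (parameters × bits-per-weight / 8) bytes, plus headroom for the KV cache and the OS. A quick sketch (model sizes and quantization levels here are illustrative, not recommendations):

```python
# Rough sizing of which local LLM weights fit in a given unified memory pool.
# Ignores KV-cache and OS overhead, so real headroom needs are higher.

def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(weights_gb(70, 8))    # 70B model at 8-bit  -> 70.0 GB
print(weights_gb(70, 4))    # 70B model at 4-bit  -> 35.0 GB
print(weights_gb(180, 8))   # 180B model at 8-bit -> 180.0 GB (wants 192 GB+)
```

This is why a 192/256 GB ceiling matters: it's roughly the line between 70B-class and 180B-class models at full 8-bit quantization.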
 
Reactions: Mopetar and Eug

Doug S

Diamond Member
Feb 8, 2020
8% better than A18 Pro. I wonder how much performance got nerfed because of the MIE implementation.

Personally I wouldn't care if it got SLOWER due to MIE, if it can actually live up to Apple's claims and stop the NSO-type chained attacks. I've long thought that, given CPUs have become something like five orders of magnitude faster in my lifetime, we could well afford to donate some of that performance towards better security.

We'll be able to figure out if it works based on future out-of-band patches from Apple for attacks "under active exploit". You can usually tell which ones are part of a chain, or that information will come out shortly after the patch is released. If it is a memory-related issue and Apple's security content write-up says it isn't applicable to A19/A19P/M5, you can be pretty certain MIE is the reason why.
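For readers unfamiliar with the mechanism: MIE builds on Arm memory tagging, where allocations and pointers carry small tags that must match on every access. The sketch below is a toy Python simulation of that idea only — the class names and granularity are illustrative, not Apple's or Arm's implementation:

```python
# Toy simulation of the tag-checking idea behind memory tagging, the basis
# of Apple's MIE. Real hardware tags 16-byte granules; this only illustrates
# why use-after-free and overflow probes trap instead of silently succeeding.
import random

class TaggedHeap:
    def __init__(self):
        self.mem_tag = {}                   # granule address -> current tag

    def alloc(self, addr):
        tag = random.randrange(16)          # 4-bit tag, as in Arm MTE
        self.mem_tag[addr] = tag
        return (addr, tag)                  # the "pointer" carries its tag

    def free(self, addr):
        # Retag on free; a stale pointer's old tag now (probably) mismatches.
        # Note the 1-in-16 chance of a coincidental match, a real MTE caveat.
        self.mem_tag[addr] = random.randrange(16)

    def load(self, ptr):
        addr, tag = ptr
        if self.mem_tag.get(addr) != tag:   # tag mismatch -> synchronous trap
            raise MemoryError(f"tag check failed at {addr:#x}")
        return "ok"
```

A use-after-free then looks like: `p = heap.alloc(0x1000)`, `heap.free(0x1000)`, and a later `heap.load(p)` almost always raises instead of returning attacker-controlled data — which is exactly the exploit-chain step MIE is meant to break.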
 
Reactions: ashFTW

Doug S

Diamond Member
Feb 8, 2020
Yes, but they are not using Apple products, and I assume they aren't using OS X either. I believe they use HP.

They've been building servers with M2 Ultra and reportedly are building newer ones on N2 soon (not sure if based on existing chips or a custom die). They will always buy stuff from outside, but at the very least it makes sense to build their own AI servers instead of spending billions on Nvidia. Even if theirs aren't as good, they will get a lot more bang for the buck with their own, since they won't be paying Nvidia's massive margins on top of TSMC's margins.
 

mikegg

Golden Member
Jan 30, 2010
They've been building servers with M2 Ultra and reportedly are building newer ones on N2 soon (not sure if based on existing chips or a custom die). They will always buy stuff from outside, but at the very least it makes sense to build their own AI servers instead of spending billions on Nvidia. Even if theirs aren't as good, they will get a lot more bang for the buck with their own, since they won't be paying Nvidia's massive margins on top of TSMC's margins.
I don't doubt that Apple has great GPU designers. But Nvidia likely has even better ones, since they pay so much more. Nvidia is innovating at a breakneck pace, with new systems every year. Apple's design advantage is in efficient consumer SoCs. Can Apple's internal server GPU designers keep up?

Furthermore, if they design their own training GPUs, their AI researchers can't use industry-standard software libraries, forcing them to reinvent the wheel. Bringing on new AI researchers would not be as easy if they're using proprietary tech.

I think the sensible choice for Apple is to put in a massive order for Nvidia GPUs, like 200-300k GPU clusters, in order to catch up. They can roll out their own server GPUs slowly.
 
Reactions: Mopetar

fastandfurious6

Senior member
Jun 1, 2024
Apple is once again making it clear to everyone that they are perhaps the biggest contributor to changes in ARM specifications.

interest + bottomless money pit. Arm is simply the 'best' open mobile architecture, and Apple has unlimited money, so they just invest
 

Eug

Lifer
Mar 11, 2000
With robust cooling, that will hit 4000 / 10000, which is just crazy for a phone.
I will be buying an A19 Pro soon. Truthfully though, I've been running an A14 Bionic now for 5 years, and it's still totally fine performance-wise for a phone.

Geekbench 6.5 for A14 Bionic is 2231 / 5375.

I suspect there will be a benchmark score increase of 75%-100% going from A14 Bionic to A19 Pro, considering that A18 Pro is already at 3593 / 9151. However, I'm only upgrading for the new camera.
3895 / 9746 represents a 75% / 81% increase in performance over my current iPhone, which still feels very quick in day-to-day usage.
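The quoted uplift checks out directly from the Geekbench 6 scores given in the posts above:

```python
# Generational uplift check: A14 Bionic vs. A19 Pro, Geekbench 6 scores
# taken from the posts above.
a14 = {"single": 2231, "multi": 5375}
a19_pro = {"single": 3895, "multi": 9746}

for metric in ("single", "multi"):
    gain = (a19_pro[metric] / a14[metric] - 1) * 100
    print(f"{metric}: +{gain:.0f}%")
# -> single: +75%, multi: +81%, matching the figures quoted above
```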
 
Reactions: Gideon
Yeah, only Apple has the vast war chest and deep interest in shaping the Arm instruction set. There's simply no other company that does what Apple does in vertically integrating every aspect of their product, down to how the instructions in their processors work. Plus it doesn't hurt that they pay Arm a lot of money for their input into how Arm as an instruction set should function.