Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as they do with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).
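
For anyone curious how the GPU throughput figures above hang together, here is a rough back-of-the-envelope check. The ~1.28 GHz clock, 1024 ALUs, 64 texture units, and 32 ROPs are commonly reported third-party figures rather than numbers from Apple's spec page, so treat this as a sketch:

```c
/* Back-of-the-envelope check of the M1 GPU figures listed above.
 * The clock, ALU, TMU and ROP counts are assumptions (commonly reported
 * third-party numbers), not Apple-published values. */
#include <stdio.h>

int main(void) {
    const double clock_ghz = 1.278; /* assumed GPU clock */
    const int alus = 1024;          /* 128 EUs x 8 ALUs per EU */
    const int tmus = 64;            /* assumed texture units */
    const int rops = 32;            /* assumed render output units */

    /* FP32: one fused multiply-add (2 FLOPs) per ALU per clock */
    printf("FP32:    %.2f TFLOPS\n", alus * 2 * clock_ghz / 1000.0);
    /* One texel per TMU per clock, one pixel per ROP per clock */
    printf("Texture: %.1f Gtexels/s\n", tmus * clock_ghz);
    printf("Pixel:   %.1f Gpixels/s\n", rops * clock_ghz);
    return 0;
}
```

That lands at roughly 2.6 TFLOPS, 82 Gtexels/s, and 41 Gpixels/s, matching the figures above.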

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K h.264, h.265 (HEVC), and ProRes
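
As a quick sanity check on the "up to 100 GB/s" figure: assuming the commonly reported 128-bit LPDDR5-6400 interface (neither the bus width nor the transfer rate appears above, so both are assumptions), the peak bandwidth works out as follows:

```c
/* Peak-bandwidth arithmetic for M2's unified memory, assuming a 128-bit
 * LPDDR5-6400 interface (both numbers are assumptions, not from the post). */
#include <stdio.h>

int main(void) {
    const double transfers_per_s = 6400e6; /* LPDDR5-6400: 6400 MT/s */
    const int bus_bits = 128;              /* assumed unified-memory bus width */
    double gbps = transfers_per_s * bus_bits / 8.0 / 1e9;
    printf("Peak bandwidth: %.1f GB/s\n", gbps); /* ~102.4 GB/s */
    return 0;
}
```

102.4 GB/s, which rounds to the quoted 100 GB/s.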

M3 Family discussion here:


M4 Family discussion here:

 

Mopetar

Diamond Member
Jan 31, 2011
Yeah only Apple has the vast war chest and deep interest in shaping the Arm instruction set. There’s simply no other company that does what Apple does in vertically integrating every aspect of their product down to how the instructions in their processors work. Plus it doesn’t hurt that they pay Arm a lot of money for their input in how Arm as an instruction set should function.

I don't think Apple cares as much about the ARM ISA as others might. They have a license to make their own custom core and they make both the software and the hardware that runs it. They can make their own instructions without having to care if/when ARM would add them or if other software developers would use them. To them it's almost a hidden upside if other hardware can't run their code because it eliminates the hackintosh market.
 

mikegg

Golden Member
Jan 30, 2010
I don't think Apple cares as much about the ARM ISA as others might. They have a license to make their own custom core and they make both the software and the hardware that runs it. They can make their own instructions without having to care if/when ARM would add them or if other software developers would use them. To them it's almost a hidden upside if other hardware can't run their code because it eliminates the hackintosh market.
Hackintosh is already dead, no?
 

johnsonwax

Senior member
Jun 27, 2024
Yes, but they are not using Apple products and I assume they aren't using OSX either. I believe they use HP.
Yeah, that's my f'ing point. They can make whatever silicon and OS they want - if 'they have the talent, they might as well use them' is the standard, why doesn't it apply there? I mean Apple did make server hardware and OS once upon a time. Until Apple decided they wanted to use Apple Silicon proprietary hardware features to enhance security, there was no IP advantage to doing that. Commodity worked fine, so just deploy Azure, which is what they did. It gives them flexibility. Same for assembly, same for mining, to some degree same for foundry.

What does Apple get out of N1 by giving up flexibility? Nobody is buying an iPhone because it has N1 in it. Where's the value extraction? Now, if they add something proprietary on top of BT to make their wearables work better, then you have value. But I don't think just copying the feature set of a Broadcom chip justifies this effort. Incorporating UWB might, because that was a protocol that basically had no silicon in the wild until Apple made U1, etc. This is a move that doesn't yet make sense, and may only make sense with the N2, etc. Otherwise it's a change in approach. Apple has had a few of those of late - AVP, some of the AI stuff, etc. Is it a deterioration of Apple's business model discipline or is there an element here that we don't yet see?
 

johnsonwax

Senior member
Jun 27, 2024
I don't think Apple cares as much about the ARM ISA as others might. They have a license to make their own custom core and they make both the software and the hardware that runs it. They can make their own instructions without having to care if/when ARM would add them or if other software developers would use them. To them it's almost a hidden upside if other hardware can't run their code because it eliminates the hackintosh market.
Depends on whether they rely on the open source layers to support the added instructions. In the case of a security layer that relies on memory allocation, I would think that in order to secure broad sections of the OS they would need to get those instructions into probably some critical GNU tools, BSD, etc., and that only happens in one of two ways: either those instructions are part of the ARM ISA proper, or Apple forks the entire project and takes full responsibility for merging in any features/fixes it wants from the original project.

This isn't some feature that sits on top of an API endpoint, it's a replacement for _malloc_. It's totally foundational. Having it in the ARM ISA means they can submit pull requests to all of those projects.
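
To make the "replacement for _malloc_" point concrete, here is a purely software sketch of allocation-time pointer tagging. The helper names (tagged_alloc, tagged_check) and the 4-bit tag in the pointer's top byte are made up for illustration; this is not Apple's or Arm's actual mechanism, just the general shape of the idea that the allocator assigns a tag which the hardware would then check on every access:

```c
/* Software-only illustration of allocation-time pointer tagging.
 * Everything here (names, tag width, single shadow tag) is hypothetical;
 * real implementations do the check in hardware on each load/store. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define TAG_SHIFT 56  /* stash a 4-bit tag in the otherwise-unused top byte */
#define TAG(p)    (((uintptr_t)(p) >> TAG_SHIFT) & 0xF)
#define STRIP(p)  ((void *)((uintptr_t)(p) & ~((uintptr_t)0xFF << TAG_SHIFT)))

static uint8_t shadow_tag; /* tag recorded for the most recent allocation */

void *tagged_alloc(size_t n) {
    void *p = malloc(n);
    shadow_tag = rand() & 0xF;          /* allocator picks a tag for this block */
    return (void *)((uintptr_t)p | ((uintptr_t)shadow_tag << TAG_SHIFT));
}

int tagged_check(const void *p) {       /* hardware would do this on every access */
    return TAG(p) == shadow_tag;
}

int main(void) {
    char *p = tagged_alloc(16);
    printf("valid pointer passes:  %d\n", tagged_check(p));
    /* forge a pointer with a stale tag, as a use-after-free might produce */
    void *bad = (void *)((uintptr_t)STRIP(p) |
                         ((uintptr_t)((shadow_tag + 1) & 0xF) << TAG_SHIFT));
    printf("forged pointer passes: %d\n", tagged_check(bad));
    free(STRIP(p));
    return 0;
}
```

The tag has to be assigned where memory is allocated, which is exactly why the allocator and everything built on top of it has to know about the new instructions.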
 

johnsonwax

Senior member
Jun 27, 2024
I don't doubt that Apple has great GPU designers. But Nvidia likely has even better ones since they pay so much more. Nvidia is innovating at a breakneck pace with new systems every year. Apple's design advantage is in efficient consumer SoCs. Can Apple's internal server GPU designers keep up?

Furthermore, if they are designing their own training GPUs, their AI researchers can't use industry standard software libraries - forcing them to reinvent the wheel. Bringing on new AI researchers would not be as easy if they're using proprietary tech.

I think the sensible choice for Apple is to put in a massive order for Nvidia GPUs, like 200-300k GPU clusters, in order to catch up. They can roll out their own server GPUs slowly.
I think you are misunderstanding something. The whole point of the AI hardware they are building is to offload inference from AS consumer hardware to something faster. If you want their researchers to use industry standard software libraries that are only available on Nvidia silicon, then Apple needs to put Nvidia silicon in Apple consumer products. The whole goddamn point of the exercise is to do as much of this on device as possible. If they can't and need to accede to Nvidia silicon, then they might as well roll up the whole exercise, just cut a check to bake ChatGPT into the OS, and walk away.

Your solution precludes the entire problem. Apple's problem isn't a lack of server silicon. Apple's problem is having to reinvent every aspect of the broader AI approach to work on device, understanding that ChatGPT first shipped just 2.75 years ago. This little revolution is extremely recent. This little revolution is also reliably losing a ton of money for everyone not named Nvidia, and I'm increasingly convinced Apple is going to win this simply on economic terms. They're the only player that has a somewhat workable business model on AI, because they don't expect to make any money off of it. Everyone else does, and hasn't yet figured out how. That's normally a recipe for disaster.
 

jpiniero

Lifer
Oct 1, 2010
Yeah, that's my f'ing point. They can make whatever silicon and OS they want - if 'they have the talent, they might as well use them' is the standard, why doesn't it apply there?

They don't want to be in the server business. That's what I mean about Wall Street 'forcing' them to do AI servers just because of AI hype.
 

Doug S

Diamond Member
Feb 8, 2020
Am I the only one annoyed by the fact that "iPhone 17" models use "iPhone 18,*" SoCs? Every year I'm reminded of this strange numbering discrepancy... 😂😂😂😂
 

Doug S

Diamond Member
Feb 8, 2020
Yep.
Though modem/BT combos will be discrete for a while.
SoC area is getting kinda pricey these days.

Being on the same SoC as the SLC would help in some ways, including by reducing the need for cache dedicated to the baseband, thus reducing its size. What's left would be all logic, fully benefiting from a smaller process. Most of the power draw for 5G reception (which is the common case; almost all of us receive far more data on our phones than we send) is the baseband, so it'll help with power use. Besides, all the Android SoCs have the modem baseband on die, so it's kinda hard to imagine Apple is gonna say that's too expensive.

There are also potential opportunities for sharing between cellular and wifi demodulator blocks, for example (if you're OK with not being able to max out MIMO mmwave 5G at the same time you're maxing out MIMO 320 MHz wifi 7 lol). Such sharing would only be possible for a company that controls both the hardware and the software. Not sure how easy/feasible that really is, but you can bet Apple is at the very least investigating this whether or not cellular/wifi becomes part of the SoC or remains a separate die.
 

mikegg

Golden Member
Jan 30, 2010
I think you are misunderstanding something. The whole point of the AI hardware they are building is to offload inference from AS consumer hardware to something faster.
No, I didn't miss anything. I was replying to the bolded part below - where Doug S talked about Apple not having to spend billions on Nvidia hardware which I presume would be used for training.

They will always buy stuff from outside, but at the very least it makes sense to build their own AI servers instead of spending billions on Nvidia.

If you want their researchers to use industry standard software libraries that are only available on Nvidia silicon, then Apple needs to put Nvidia silicon in Apple consumer products. The whole goddamn point of the exercise is to do as much of this on device as possible. If they can't, they need to accede to Nvidia silicon, then they might as well roll up the whole exercise and just cut a check to bake ChatGPT into the OS and walk away.
This absolutely makes no sense whatsoever. None. Apple can train on Nvidia hardware and then deploy to run on Apple Silicon consumer devices. In fact, Apple is training on Google TPUs right now. They don't need to add Google TPUs to MacBooks and iPhones.

This little revolution is also reliably losing a ton of money for everyone not named Nvidia, and I'm increasingly convinced Apple is going to win this simply on economic terms. They're the only player that has a somewhat workable business model on AI, because they don't expect to make any money off of it. Everyone else does, and hasn't yet figured out how. That's normally a recipe for disaster.
No. Part of Apple's problem is the lack of AI chips. In fact, reports came out saying the reason they fell so far behind OpenAI, Anthropic, and Google is that their CFO nixed a plan to buy 50k GPUs years ago.

Apple's AI problem spans multiple parts:

1. Apple doesn't have anything that can compete against OpenAI, Anthropic, and Google frontier models. Heck, they don't even have anything to compete against Chinese AI companies. Apple has already lost that race.

2. They can't even train a good enough model for Siri.

This little revolution is also reliably losing a ton of money for everyone not named Nvidia, and I'm increasingly convinced Apple is going to win this simply on economic terms. They're the only player that has a somewhat workable business model on AI, because they don't expect to make any money off of it. Everyone else does, and hasn't yet figured out how. That's normally a recipe for disaster.

Oracle just blew out earnings and gained 36% in one single day. Microsoft continues to make a ton of money off AI via Copilot subs and AI infrastructure. Obviously Nvidia.

OpenAI and Anthropic make monstrous profits from inference. https://martinalderson.com/posts/are-openai-and-anthropic-really-losing-money-on-inference/

The reason they're making a loss is because they're in a race against each other to build ever more intelligent models. If they stopped training today, they'd be very profitable right now. The business model already works.

This is exactly what Altman said recently: "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chatgpt-future

People always claim that OpenAI is losing money and that therefore there is no business model for AI, as if it's some kind of car wash business where you have to turn a profit right away. No. This is tech. Startups always lose money in pursuit of growth. Eventually, 1 or 2 companies will emerge and dominate profits.

OpenAI said they're tripling revenue this year from $4b to $12b. Anthropic went from $1b to $5b in 6 months this year. When you're growing that fast, it's not about making a net income now. It's about fueling that growth.
 

Eug

Lifer
Mar 11, 2000
At least now we know it'll ship with iOS 25, so it's clearing up.
Haha, no. It's iOS 26, even though it's 2025 this year.

A19 Pro
New GB score!
Nice. It's a 10-15% improvement in SoC speed over A18 Pro.


And an 80-95% improvement in SoC speed over my current A14 Bionic.


Here's hoping for 4000 single-core soon. :)
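
For what it's worth, those percentages fall out of the single-core numbers roughly like this (the scores below are approximate placeholders around where public Geekbench results land, not the values from the attachments):

```c
/* Rough uplift arithmetic; the scores are approximate placeholder values,
 * not taken from the screenshots in the post. */
#include <stdio.h>

int main(void) {
    const double a14 = 2100, a18_pro = 3450, a19_pro = 3900; /* assumed GB single-core */
    printf("A19 Pro vs A18 Pro: +%.0f%%\n", (a19_pro / a18_pro - 1) * 100); /* ~13% */
    printf("A19 Pro vs A14:     +%.0f%%\n", (a19_pro / a14 - 1) * 100);     /* ~86% */
    return 0;
}
```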
 

Doug S

Diamond Member
Feb 8, 2020
So we have all the iPhone 17s with iPhone18,x code names with A19 chips running iOS 26!! What a mess

Like 10 years ago with the iPhone X, when people were wondering what comes next, I was saying Apple should just name it for the year. At least they abandoned the 's' stuff, and 10 years later they did the year-based naming, but for iOS rather than the iPhone. Pretty sure the codenames and SoC names don't matter to them because the typical customer doesn't see that stuff. It is just annoying for us lol

I am kind of annoyed by the iPhone Air naming too tbh. If they update it next year, will it be like the iPhone SE and iPad Pro, where people variously identify them as "second generation" or "2026 model" or whatever? That's just unnecessarily confusing. I can't help wondering if that naming means that maybe the iPhone Air won't be updated yearly, and maybe won't even be updated on the same schedule as the other iPhones. It seems like a decent option; I don't really make use of the fancy camera on my Pro Max, so I could sacrifice that (and one GPU core) in exchange for a lighter phone. Kinda wonder how it does as far as hot spots given the thin case, titanium rather than aluminum, and no vapor chamber.
 

johnsonwax

Senior member
Jun 27, 2024
OpenAI and Anthropic make monstrous profits from inference. https://martinalderson.com/posts/are-openai-and-anthropic-really-losing-money-on-inference/

The reason they're making a loss is because they're in a race against each other to build ever more intelligent models. If they stopped training today, they'd be very profitable right now. The business model already works.

This is exactly what Altman said recently: "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chatgpt-future
And TSMC would be even more profitable if they no longer had to build fabs. But that's not how the economy works.
 

johnsonwax

Senior member
Jun 27, 2024
Interesting
The SPR AVS protocol—introduced with the iPhone 17 lineup—is Apple’s new proprietary wireless standard designed to optimize audio, video, and sensor data transmission across devices.
There is something very confusing here. SPR AVS (Standard Power Range, Adjustable Voltage Supply) is a new feature of USB PD 3.2, introduced at the end of 2024 and available in the new iPhones.

Adjustable Voltage Supply (AVS):
  • AVS (Adjustable Voltage Supply) is a power supply mode introduced in USB PD 3.1/3.2; it allows the power supply (Source) to dynamically adjust the output voltage within a specific range. Unlike the earlier fixed levels of 5V, 9V, 15V, and 20V, precise output voltage control can better meet safety regulations and energy efficiency certification requirements. This is particularly useful in application scenarios that require precise voltage control (such as laptops, portable devices, and fast charging systems).
  • The AVS test steps were updated in the USB PD CTS version released in the third quarter of 2024. The USB PD 3.2 specification requires all products with a power supply capacity exceeding 27W to support the SPR AVS mode.
  • SPR AVS voltage range:
    • 9V to 15V is suitable for SPR Sources of up to 45W.
    • 9V to 20V is suitable for SPR Sources over 45W.
  • The adjustable voltage can be calibrated in 100mV increments.

You're telling us there's a 2nd SPR AVS being introduced at the same time that is a new wireless standard? And why is it tightly integrated with C1X and not N1? It's offered as a Bluetooth alternative, and therefore a local protocol. Why on earth would it be handled by C1X?

Don't get me wrong, a new protocol is exactly what I'm looking for to justify the N1s existence, but the likelihood that Apple is embracing two identical 6-letter acronyms that refer to completely different things is hard to swallow. I think that wireless protocol is some AI garbage.
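
Setting the naming collision aside, the USB PD side of SPR AVS quoted above boils down to a simple request rule: 9-15 V for sources up to 45 W, 9-20 V for sources above 45 W, in 100 mV steps. A minimal sketch, with made-up function and parameter names (this is not code from the PD spec):

```c
/* Hypothetical validator for an SPR AVS voltage request, per the quoted
 * summary: 9-15 V for sources up to 45 W, 9-20 V above 45 W, 100 mV steps. */
#include <stdbool.h>
#include <stdio.h>

bool spr_avs_request_ok(unsigned source_watts, unsigned request_mv) {
    unsigned max_mv = (source_watts > 45) ? 20000 : 15000; /* upper bound set by source power */
    if (request_mv < 9000 || request_mv > max_mv)           /* 9 V floor in both ranges */
        return false;
    return request_mv % 100 == 0;                           /* 100 mV granularity */
}

int main(void) {
    printf("%d\n", spr_avs_request_ok(45, 12300)); /* 1: in range, 100 mV multiple */
    printf("%d\n", spr_avs_request_ok(45, 18000)); /* 0: above 15 V for a 45 W source */
    printf("%d\n", spr_avs_request_ok(65, 18050)); /* 0: not a 100 mV multiple */
    return 0;
}
```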