Discussion Apple Silicon SoC thread

Eug

Lifer
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from the occasional slight clock speed difference).

EDIT:

M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC, and ProRes

M3 Family discussion here:


M4 Family discussion here:


M5 Family discussion here:

 
Why are you guys so eager to fall for nonsense?
Where did the claim come from? WTF knows? The world is full of clueless idiots who range from making stuff up to simply not understanding what their tech friends tell them. Chinese writing is the new English accent -- Americans think anyone using it has to be smart!

Maynard, you are a very smart guy, but it would behoove you to learn to disagree with people without veering into personal attacks. It would be more polite, and have less of a tendency to make you look foolish when you get it wrong (which you appear to have in this case.)
 
Apple's SSD architecture should be perfectly possible to do with standard raw NAND dies: the controller is in the SoC, encryption is in the SoC, and at that point there is no harm in communicating with the NAND dies the same way standard SSD controllers do it.
Yet Apple reinvented wheels and uses those proprietary NVMe but not quite interfaces instead of the standard NAND device interface.
There's a factor you've totally failed to account for: pin count. ONFI and Toggle, the standard (not-really-standards) interfaces in question, are parallel interfaces at low-ish clock rates, so each NAND controller channel costs a bunch of pins. If you want to build a high performance SSD, you need lots of channels.

That's not a problem for standalone SSD controllers, because that's the entire function of the chip. It is a problem for Apple, because they're building a giant many-function SoC. Its baseline even without the SSD is a ton of pins.

What do chip designers do when they don't have enough pins? They use external pin expander silicon. Clock serial data in/out of the expander at a higher clock speed than the expanded parallel data, and you can reduce the pin count needed for that function.
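To put rough numbers on it, here is a back-of-envelope sketch in Python. The per-channel and per-lane pin counts are illustrative assumptions, not Apple's or any vendor's actual figures; the point is just the ratio.

```python
# Rough pin-cost comparison: parallel NAND channels on the SoC vs. a serial
# PCIe link to an external bridge. All per-interface pin counts below are
# illustrative assumptions, not vendor-published figures.

ONFI_PINS_PER_CHANNEL = 20   # assumed: ~8 DQ + strobes + CE/ALE/CLE/WE/RE etc.
NAND_CHANNELS = 8            # assumed channel count for a fast SSD

PCIE_PINS_PER_LANE = 4       # one TX pair + one RX pair
PCIE_LANES = 4               # assumed: a x4 link
PCIE_OVERHEAD_PINS = 6       # refclk pair, reset, clkreq, etc. (rough)

parallel_pins = ONFI_PINS_PER_CHANNEL * NAND_CHANNELS
serial_pins = PCIE_PINS_PER_LANE * PCIE_LANES + PCIE_OVERHEAD_PINS

print(f"Direct NAND channels on the SoC: ~{parallel_pins} signal pins")
print(f"PCIe x{PCIE_LANES} to an external bridge: ~{serial_pins} signal pins")
```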

Apple chose PCIe as the serial interface. That doesn't mean what they built is "NVMe but not quite". As far as I know, there isn't a full flash translation layer on the module, so even if Apple published full specs, you could not use one as an SSD on its own. It would need something else to virtualize logical block addresses, do wear leveling, provide an NVMe-compatible register interface, and so forth. (Which is to say, the NVMe controller inside the Apple Silicon SoC.)
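To be concrete about what's missing, here is a toy flash translation layer in Python. It is purely illustrative, not Apple's design: just enough to show that something has to own the logical-to-physical mapping and wear leveling before raw NAND behaves like a block device.

```python
# Toy FTL sketch (illustrative only, not Apple's implementation): the piece
# that maps logical blocks to physical pages and spreads wear, which the raw
# NAND plus bridge on the module does not provide by itself.

class ToyFTL:
    def __init__(self, physical_pages: int):
        self.l2p = {}                              # logical block -> physical page
        self.erase_counts = [0] * physical_pages   # wear tracking per page
        self.free = list(range(physical_pages))    # free physical pages

    def write(self, lba: int, data: bytes) -> None:
        if not self.free:
            raise RuntimeError("out of pages (toy model has no garbage collection)")
        # NAND can't be rewritten in place: pick the least-worn free page,
        # remap the logical block, and recycle the page it used to live on.
        self.free.sort(key=lambda p: self.erase_counts[p])
        page = self.free.pop(0)
        old = self.l2p.get(lba)
        if old is not None:
            self.erase_counts[old] += 1            # stand-in for erase/reclaim
            self.free.append(old)
        self.l2p[lba] = page
        # programming 'data' into 'page' would happen here

    def read(self, lba: int):
        return self.l2p.get(lba)                   # physical page to read, if mapped
```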

The other thing which torpedoes the "petty" accusations is that some time around 2011 Apple acqui-hired an SSD controller startup and set them to work designing SSD controller IP around Apple's requirements. They shipped it in iPhones many, many years before they carried it over to Apple Silicon for Macs. To Apple, the easy, low-friction path to getting where they wanted to go (unifying features across their whole line) was to scale their preexisting SSD architecture up.
 
You can have an M2 socket for a NAND board without any NVME controller, as seen with the Mac Mini.
That is not an M.2-compatible socket. Nor should it be, given that plugging an NVMe M.2 drive in (or attempting to plug Apple flash modules into other systems) would only lead to disappointment, given what I've related above about how Apple's PCIe to NAND bridges work.
 
That is not an M.2-compatible socket. Nor should it be, given that plugging an NVMe M.2 drive in (or attempting to plug Apple flash modules into other systems) would only lead to disappointment, given what I've related above about how Apple's PCIe to NAND bridges work.
M.2 is the name of the slot. Like coercitiv said, you can have an M.2 slot that isn't NVMe. A lot of cheaper M.2 SSDs are/were SATA.

 
M.2 is the name of the slot. Like coercitiv said, you can have an M.2 slot that isn't NVMe. A lot of cheaper M.2 SSDs are/were SATA.
M.2 is literally not the name of the slot in the M4 Mini, or other Apple Silicon Macs that put the flash on a small card. People just assume it's M.2 based on superficial similarity (small card, edge connector, has flash), but the pinout's different, the locations of contacts on the memory card edge are different (Apple's are inset more and they also have a second row of contacts for shield ground), keying is different, and probably even the width is different.
 
I think most don’t have an issue with whatever protocol or solution Apple uses. The problem for everyone is the cost of the SSD upgrades.

It’s somewhat better now with the M5 series’ higher base storage, now that there will be discounts at retailers for those stock configs.
 
M.2 is literally not the name of the slot in the M4 Mini, or other Apple Silicon Macs that put the flash on a small card. People just assume it's M.2 based on superficial similarity (small card, edge connector, has flash), but the pinout's different, the locations of contacts on the memory card edge are different (Apple's are inset more and they also have a second row of contacts for shield ground), keying is different, and probably even the width is different.
Yeah, definitely not M.2.

[Image: M4 SSD module vs. standard NVMe M.2 drive comparison]

BTW, the older Macs that could use M.2 drives (through an adapter) weren't actually M.2 either, even though they were NVMe compatible.

 
AFAIK, nobody sells raw NAND in M.2 format or otherwise to end users. Second, there are some pretty significant "under the hood" differences in NAND from different vendors and different product generations from the same vendor. All that stuff about number of dies per deck, number of decks, size of erase blocks and so forth MATTERS.
Raw NAND is NAND dies, you would be soldering them on, usually at some specialised place with expensive tools, definitely not user friendly.
Not on M.2 modules. And yeah, the modules Apple uses are non standard and are not what is needed. That's not really relevant to the point I was making.
NAND chips don't "advertise" that stuff the way DIMMs do with XMP and so forth, because there is no need: NAND chips are always paired with controllers that have been preprogrammed to handle the NAND chips they're using. They must have some way for the controller to interrogate them, but it's probably something like a PCI-ID-style identifier that the controller matches against its list of preprogrammed settings, rather than an exhaustive list of everything the controller needs the way DIMMs provide.

Theoretically it could work if Apple produced a list of manufacturer/part number/spec combos to ensure people bought one of the NAND chips the controller knows how to deal with, but that's pretty clunky and un-Apple-like to put that onus on the consumer.

So what they'd really need would be for a market for raw NAND to appear, along with a protocol that advertised its full specs so that controllers wouldn't be forced to only work with NAND they'd been previously programmed to work with. That's probably not going to happen unless Intel or AMD decides to integrate an SSD controller on their CPU dies.
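As a sketch of the ID-matching approach described above (every ID and geometry value below is invented purely for illustration, not taken from any real part):

```python
# Hypothetical controller bring-up table: settings are keyed by the ID bytes
# the NAND die reports, and anything not in the table is rejected. All IDs
# and geometry numbers here are made up for illustration.

KNOWN_NAND = {
    (0x2C, 0xA4): {"vendor": "ExampleCo", "dies_per_package": 4,
                   "planes": 4, "page_kib": 16, "pages_per_block": 1152},
    (0x98, 0x3B): {"vendor": "OtherCo",   "dies_per_package": 2,
                   "planes": 2, "page_kib": 16, "pages_per_block": 768},
}

def bring_up(id_bytes):
    settings = KNOWN_NAND.get(id_bytes)
    if settings is None:
        # Unlike a DIMM with SPD/XMP, there's no self-describing spec sheet to
        # fall back on, so an unrecognised part is simply unusable.
        raise ValueError(f"unsupported NAND ID {id_bytes}")
    return settings

print(bring_up((0x2C, 0xA4))["vendor"])   # -> ExampleCo
```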


There's a factor you've totally failed to account for: pin count. ONFI and Toggle, the standard (not-really-standards) interfaces in question, are parallel interfaces at low-ish clock rates, so each NAND controller channel costs a bunch of pins. If you want to build a high performance SSD, you need lots of channels.

That's your theory that the SoCs don't have enough pins going out. But what makes you so certain there is no space for enough pins? Apple already saves a lot of pins by having memory on package. They have relatively little connectivity. The savings from using non-custom NAND dies would likely leave some room to actually add pins if needed.

Heck, even the power supply needs of Apple SoCs are lower, so they don't need as many pads for power and ground.

(Edit: And I should add this - you keep piling on arguments "why things can't be done". Usually it turns out things can be done if people want them to happen. In this case, Apple most likely decided that they don't want people to enjoy easily replaceable storage drives, and that was it. If they were generous or felt they had to be consumer-friendly on this front, they would design it accordingly and we wouldn't have this pointless theory-crafting debate. And nobody would probably come and say "OMG I thought this was impossible".)

That's not a problem for standalone SSD controllers, because that's the entire function of the chip. It is a problem for Apple, because they're building a giant many-function SoC. Its baseline even without the SSD is a ton of pins.

What do chip designers do when they don't have enough pins? They use external pin expander silicon. Clock serial data in/out of the expander at a higher clock speed than the expanded parallel data, and you can reduce the pin count needed for that function.

Apple chose PCIe as the serial interface. That doesn't mean what they built is "NVMe but not quite". As far as I know, there isn't a full flash translation layer on the module, so even if Apple published full specs, you could not use one as an SSD on its own. It would need something else to virtualize logical block addresses, do wear leveling, provide an NVMe-compatible register interface, and so forth. (Which is to say, the NVMe controller inside the Apple Silicon SoC.)


The other thing which torpedoes the "petty" accusations is that some time around 2011 Apple acqui-hired an SSD controller startup and set them to work designing SSD controller IP around Apple's requirements. They shipped it in iPhones many, many years before they carried it over to Apple Silicon for Macs. To Apple, the easy, low-friction path to getting where they wanted to go (unifying features across their whole line) was to scale their preexisting SSD architecture up.
Yeah, that doesn't really say anything. Nobody even knows how good that tech was or whether it was any better than the other SSD controller teams that got acquired at that time. Phison and SMI still took over a large part of the controller market, which suggests lots of those promising teams likely got eclipsed. Who remembers SandForce, once the SSD star? Or the guys that got acquired by OCZ (chose poorly)?
Yet this company and its never-demonstrated-before-acquisition technology are retroactively considered magic because Apple bought them*. It's one of those times Apple fans should think more critically and accept that some things are unknowns.

* This sort of "magic" argument was used a few years back to argue that NAND in Apple devices inherently has to have longer rewrite endurance because Apple has to be the best at SSDs and other companies know nothing.
I recall one debate like that; the person claiming those things turned out not to even know about LDPC.
 
Raw NAND is NAND dies, you would be soldering them on, usually at some specialised place with expensive tools, definitely not user friendly.
Not on M.2 modules. And yeah, the modules Apple uses are non standard and are not what is needed. That's not really relevant to the point I was making.





That's your theory that the SoCs don't have enough pins going out. But what makes you so certain there is no space for enough pins? Apple already saves a lot of pins by having memory on package. They have relatively little connectivity. The savings from using non-custom NAND dies would likely leave some room to actually add pins if needed.

Heck, even the power supply needs of Apple SoCs are lower, so they don't need as many pads for power and ground.

(Edit: And I should add this - you keep piling on arguments "why things can't be done". Usually it turns out things can be done if people want them to happen. In this case, Apple most likely decided that they don't want people to enjoy easily replaceable storage drives, and that was it. If they were generous or felt they had to be consumer-friendly on this front, they would design it accordingly and we wouldn't have this pointless theory-crafting debate. And nobody would probably come and say "OMG I thought this was impossible".)





Yeah, that doesn't really say anything. Nobody even knows how good that tech was or whether it was any better than the other SSD controller teams that got acquired at that time. Phison and SMI still took over a large part of the controller market, which suggests lots of those promising teams likely got eclipsed. Who remembers SandForce, once the SSD star? Or the guys that got acquired by OCZ (chose poorly)?
Yet this company and its never-demonstrated-before-acquisition technology are retroactively considered magic because Apple bought them*. It's one of those times Apple fans should think more critically and accept that some things are unknowns.

* This sort of "magic" argument was used a few years back to argue that NAND in Apple devices inherently has to have longer rewrite endurance because Apple has to be the best at SSDs and other companies know nothing.
I recall one debate like that; the person claiming those things turned out not to even know about LDPC.
No one publicly knows that they are better, no one publicly knows that they are not. So there is no evidence to factually say anything, including that they only did it to make their users pay more. Feelings versus feelings.
 
No one publicly knows that they are better, no one publicly knows that they are not. So there is no evidence to factually say anything,
Agreed.

including that they only did it to make their users pay more. Feelings versus feelings.

Well, the first part is technical, while this second part is company politics and strategy, so I can't agree here. You can't easily predict the success of a technology (especially when it's just vaporware at the time), because sometimes projects fail or don't hit targets, and you never know how far removed the public-facing claims are from reality.

But company strategies like "upcharge a lot on NAND/DRAM" are predictable. And well, you can also empirically observe them in action.
 
The removal of the 512 GB option for the old Mac Studio coincides with the introduction of the new M5 Pro/Max MacBook Pros. Those start at 1 TB now (at a higher base price too).

What was the "512 GB option" they removed? I assumed it was RAM, and I see M3 Ultra is "configurable to 256 GB". Did it use to be "configurable to 512 GB"? That's what I assumed was meant.

The M3 Ultra Studio is shown as shipping with 1 TB NAND, configurable to up to 16 TB. Did it previously ship with 512 GB and now it ships with 1 TB and that's what you're talking about? Because if so that's a nothingburger as far as I'm concerned. That would be the removal of an option zero people are ordering lol
 
That's your theory that the SoCs don't have enough pins going out. But what makes you so certain there is no space for enough pins? Apple already saves a lot of pins by having memory on package. They have relatively little connectivity. The savings from using non-custom NAND dies would likely leave some room to actually add pins if needed.


He's talking about pins on the SoC itself. Having RAM on package doesn't save any pins at all on the SoC - they have a ton of pins for all those LPDDR channels. The controller on the SoC needs to interface with the NAND, so they can either use PCIe to "compress" all that slow data on lots of pins or they'd have to add a lot more pins to an already pin heavy SoC.
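For a sense of scale, here is a rough estimate of the LPDDR signal pins alone. The per-channel figure is an assumption (data, command/address, strobes and clocks, no power/ground); the bus widths are the commonly reported ones for the M1 family.

```python
# Rough LPDDR signal-pin estimate for the M1 family. The per-channel figure
# is an assumption, not a datasheet value; bus widths are as widely reported.

PINS_PER_16BIT_CHANNEL = 33   # assumed: 16 DQ + ~7 CA + strobes/clocks/CS

for name, bus_bits in [("M1", 128), ("M1 Pro", 256), ("M1 Max", 512)]:
    channels = bus_bits // 16
    print(f"{name}: ~{channels * PINS_PER_16BIT_CHANNEL} LPDDR signal pins")
```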

You're working really hard to criticize Apple here over decisions that make a lot of sense. Now could they modularize the NAND in more of the Mac line and allow people to upgrade storage after purchase? Sure, but I think most people would much rather have a way to upgrade RAM than NAND.

With LPDDR, there hasn't been a choice about soldering down the memory. Now there are standards like LPCAMM2 that allow LPDDR to become modular. They aren't well adopted yet, and those modules cost about as much as Apple charges for upgrades, but as they get supported more widely in the PC industry, prices will come down. I think we'll see their impact with LPDDR6; adoption with LPDDR5X is likely to be fairly minimal.

So maybe when Apple moves to LPDDR6 in a few years you'll have something to actually complain about in a fair way, if they continue to solder memory in everything despite the existence of a standard that would permit making it more modular and allow post-sale upgrades.
 
What was the "512 GB option" they removed? I assumed it was RAM, and I see M3 Ultra is "configurable to 256 GB". Did it use to be "configurable to 512 GB"? That's what I assumed was meant.

The M3 Ultra Studio is shown as shipping with 1 TB NAND, configurable to up to 16 TB. Did it previously ship with 512 GB and now it ships with 1 TB and that's what you're talking about? Because if so that's a nothingburger as far as I'm concerned. That would be the removal of an option zero people are ordering lol

There was a 512GB RAM option. It is no longer available, and the 256GB option had a price increase. Eug is mixing that up with flash, I think.
 
The 15" MB Air is a better deal imo. Yeah, it's $200 more, but you get the 10-core GPU by default and a larger display (I find 13" limiting), plus slightly better thermals.
 
Maynard, you are a very smart guy, but it would behoove you to learn to disagree with people without veering into personal attacks. It would be more polite, and have less of a tendency to make you look foolish when you get it wrong (which you appear to have in this case.)
Did I get it wrong? We'll see, won't we?
Or do you have definitive evidence right now?
 
That's your theory that the SoCs don't have enough pins going out. But what makes you so certain there is no space for enough pins?
I'm sure they could enlarge the package. The question is, would they want to? M-series SoCs have quite large packages. Plain M1 was a 2502-ball package with a terrifyingly fine 0.45 mm pitch.
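A quick arithmetic sanity check on those numbers: at 0.45 mm pitch, the ball field alone sets a floor on the package footprint. This is a lower bound only; it ignores depopulated regions, keep-outs, and the on-package DRAM.

```python
# Lower-bound footprint for a 2502-ball grid at 0.45 mm pitch, assuming a
# fully populated square array. Real layouts with keep-outs are bigger.

import math

balls, pitch_mm = 2502, 0.45
side_balls = math.ceil(math.sqrt(balls))   # ~51 balls per side if square
side_mm = side_balls * pitch_mm
print(f"~{side_balls} x {side_balls} grid -> at least {side_mm:.2f} mm per side")
```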


And once again, this is literally scaled up iPhone SSD tech. Think about it, stop jerking your knee. Space is at an incredible premium in phones.

Heck, even the power supply needs of Apple SoCs are lower, so they don't need as many pads for power and ground.
They may do things a lot more efficiently than PC chips, but the top end is still high. I've observed M4 Max (in the 16" MBP platform) reporting itself using over 100W (read out via Apple's powermetrics command). They need plenty of power pins too.

(Edit: And I should add this - you keep piling on arguments "why things can't be done". Usually it turns out things can be done if people want them to happen. In this case, Apple most likely decided that they don't want people to enjoy easily replaceable storage drives, and that was it.
Apple-level boot/FDE security is definitely impossible with standard NVMe TCG Opal, but for the rest, you're completely mischaracterizing what I've said. Obviously, if security wasn't a concern, it would be very possible for them to just use an off-the-shelf NVMe stick.

You seem utterly uninterested in seriously considering whether their decision is understandable from any angle other than your preconceived idea that it must've been pure anti-consumerism, and keep dismissing things that you shouldn't because you're so prejudiced.

Yeah, that doesn't really say anything. Nobody even knows how good that tech was or whether it was any better than the other SSD controller teams that got acquired at that time. Phison and SMI still took over a large part of the controller market, which suggests lots of those promising teams likely got eclipsed. Who remembers SandForce, once the SSD star? Or the guys that got acquired by OCZ (chose poorly)?
Yet this company and its never-demonstrated-before-acquisition technology are retroactively considered magic because Apple bought them*. It's one of those times Apple fans should think more critically and accept that some things are unknowns.
Yeah, that's all just a bunch of garbage which is totally unresponsive to anything I actually wrote. I said absolutely nothing about the technical merits of the company Apple bought; I just related that Apple decided to do SSD controllers in-house for the iPhone long ago, acquired a company to do it, and did it. Since that effort was successful enough to stick around long term, by the time they decided to make Apple Silicon Macs, it would've been the easiest path to scale it up for the Mac rather than replace it with something entirely different.

* This sort of "magic" argument was used a few years back to argue that NAND in Apple devices inherently has to have longer rewrite endurance because Apple has to be the best at SSDs and other companies know nothing.
I recall one debate like that; the person claiming those things turned out not to even know about LDPC.
Well, you're not talking to that person. You're talking to someone who's never invoked "magic", and you're talking to someone who's actually worked on an FPGA implementation of a particular LDPC algorithm in a past job (for 100G networking, not SSD, but whatever).

I would appreciate it if you stopped responding to the fantasy clueless Mac fanboy you think you're crushing with your weak forum-troll-style argumentation, and instead addressed me, the actual person you're talking to, and the actual things I'm saying.
 