Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
24,114
1,760
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24,576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s
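For what it's worth, the 2.6 Teraflops figure checks out with back-of-the-envelope math. In the sketch below, the ALUs-per-EU count and the clock speed are my assumptions, not numbers Apple published:

```python
# Rough sanity check of Apple's M1 GPU throughput figure.
# Assumed: 8 ALUs per execution unit, ~1.278 GHz clock,
# and 2 FLOPs per ALU per clock (a fused multiply-add).
EUS = 128
ALUS_PER_EU = 8          # assumption
FLOPS_PER_ALU_CLK = 2    # FMA counts as two floating-point ops
CLOCK_HZ = 1.278e9       # assumption, ~1.278 GHz

tflops = EUS * ALUS_PER_EU * FLOPS_PER_ALU_CLK * CLOCK_HZ / 1e12
print(f"{tflops:.1f} TFLOPS")  # ~2.6, matching the headline number
```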

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with iPhones and iPads: just one SKU (excluding the X variants), the same across all iDevices (aside from occasional slight clock-speed differences).

EDIT:

Screen-Shot-2021-10-18-at-1.20.47-PM.jpg

M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K h.264, h.265 (HEVC), and ProRes
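The 100 GB/s bandwidth figure also falls out of simple arithmetic, assuming (my assumption, not from Apple's spec sheet) LPDDR5-6400 on a 128-bit bus:

```python
# Back-of-the-envelope check of the ~100 GB/s memory bandwidth figure.
# Assumption: LPDDR5-6400 on a 128-bit unified memory bus.
transfers_per_sec = 6400e6   # 6400 MT/s
bus_width_bytes = 128 // 8   # 128-bit bus -> 16 bytes per transfer
gb_per_sec = transfers_per_sec * bus_width_bytes / 1e9
print(f"{gb_per_sec:.1f} GB/s")  # 102.4, i.e. the ~100 GB/s quoted
```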

M3 Family discussion here:


M4 Family discussion here:

 

mikegg

Golden Member
Jan 30, 2010
1,975
577
136
Except that TSMC does make enough money to pay for their infrastructure. AI doesn't. You can't just say 'if we ignore the infrastructure we're profitable'. Good christ man.
Once again, LLMs are not a car wash business. They don't have to make a profit immediately, because growth is explosive.

Do you understand growth vs value companies?

Here is a list of companies that lost a ton of money in their early years but later became extremely profitable:
  • Nvidia (nearly went bankrupt 3x)
  • Google
  • Meta
  • Netflix
  • Amazon (investing in speed and scale)
  • Uber (rapid expansion)
  • Starbucks (rapid expansion)
  • Costco (low margin business that needed scale)
  • Ford (burned a ton of money creating assembly line)
  • Boeing (nearly bankrupt investing in 747)
  • Salesforce (lost money for a decade before SaaS subscriptions took over)
It can take 5-10 years of cash burn before a growth company becomes immensely profitable. It's been less than 3 years since ChatGPT came out. Let's not play genius and say they have no business model - especially when they're growing revenue so fast.

Good christ man. Learn a tiny bit about growth companies first. Just a small bit. Minuscule. I'm sure you would have been the type who said these companies didn't have a business model when they were losing money, and that they should all shut down and return the money to investors. 🤦‍♂️🤦‍♂️
 

Eug

Lifer
Mar 11, 2000
24,114
1,760
126
Store opened at 8 am ET, I got in at 8:06, completed order by 8:09, but by that time my Cosmic Orange 256 GB 17 Pro Max shipping times were already pushed back a week to the end of September. Most other models are apparently still arriving on launch day, Sept. 19.

It would seem the Orange 17 Pro Max is the hot ticket this year. As of right now (8:34), the 17 Pro Max has been pushed back even further to mid-October, whereas the 17 Pro is still September 19.
 

jpiniero

Lifer
Oct 1, 2010
16,799
7,249
136
Store opened at 8 am ET, I got in at 8:06, completed order by 8:09, but by that time my Cosmic Orange 256 GB 17 Pro Max shipping times were already pushed back a week to the end of September. Most other models are apparently still arriving on launch day, Sept. 19.

It would seem the Orange 17 Pro Max is the hot ticket this year. As of right now (8:34), the 17 Pro Max has been pushed back even further to mid-October, whereas the 17 Pro is still September 19.

From reading the Macrumors thread, perhaps try doing in store pickup?
 

Eug

Lifer
Mar 11, 2000
24,114
1,760
126
From reading the Macrumors thread, perhaps try doing in store pickup?
This was from the same time as when I posted my previous message:

Screenshot 2025-09-12 at 8.38.52 AM.png

Anyhow, I can wait the extra week or so for delivery on my existing order. I'll survive. ;)
 

ashFTW

Senior member
Sep 21, 2020
325
247
126
Store opened at 8 am ET, I got in at 8:06, completed order by 8:09, but by that time my Cosmic Orange 256 GB 17 Pro Max shipping times were already pushed back a week to the end of September. Most other models are apparently still arriving on launch day, Sept. 19.

It would seem the Orange 17 Pro Max is the hot ticket this year. As of right now (8:34), the 17 Pro Max has been pushed back even further to mid-October, whereas the 17 Pro is still September 19.
I ordered the base orange PM at 5:02 PT and got 9/25.
 

Eug

Lifer
Mar 11, 2000
24,114
1,760
126
I ordered the base orange PM at 5:02 PT and got 9/25.
Yeah, my base orange PM is supposedly arriving Sept. 25 - Oct. 1. My order was completed 7 mins after yours. However, this is in Canada, and shipping dates will vary based on location of course.
 

ashFTW

Senior member
Sep 21, 2020
325
247
126
Yeah, my base orange PM is supposedly arriving Sept. 25 - Oct. 1. My order was completed 7 mins after yours. However, this is in Canada, and shipping dates will vary based on location of course.
I’m hoping 🤞 it will arrive a bit sooner so I get it before some upcoming travel. Orange seems to be the most popular. I first pre-ordered silver with a black case, but later decided to change it up this year instead of going with the usual understated silver/grey/black colors. I also got a matching orange strap with the new WU3 in black.
 

johnsonwax

Senior member
Jun 27, 2024
375
565
96
Once again, LLMs are not a car wash business. They don't have to make a profit immediately, because growth is explosive.

Do you understand growth vs value companies?
Buddy, I retired early because I made millions understanding that.

But even growth companies have a business model where you can see 'once we climb the adoption curve, we'll be profitable'. AI has already climbed the adoption curve, though. One of the notable things about it is that they climbed that curve almost immediately.

Here is a list of companies that lost a ton of money in their early years but later became extremely profitable:
  • Nvidia (nearly went bankrupt 3x)
  • Google
  • Meta
  • Netflix
  • Amazon (investing in speed and scale)
  • Uber (rapid expansion)
  • Starbucks (rapid expansion)
  • Costco (low margin business that needed scale)
  • Ford (burned a ton of money creating assembly line)
  • Boeing (nearly bankrupt investing in 747)
  • Salesforce (lost money for a decade before SaaS subscriptions took over)
It can take 5-10 years of cash burn before a growth company becomes immensely profitable. It's been less than 3 years since ChatGPT came out. Let's not play genius and say they have no business model - especially when they're growing revenue so fast.

Good christ man. Learn a tiny bit about growth companies first. Just a small bit. Minuscule. I'm sure you would have been the type who said these companies didn't have a business model when they were losing money, and that they should all shut down and return the money to investors. 🤦‍♂️🤦‍♂️
Yes, it's been less than 3 years, but they have scale. They have 750 million weekly active users. The whole point of these companies has been that they would build value into their cost, and that would bring in more users. You build value through training - the thing they are struggling to justify, by your own admission. And we are seeing strongly diminishing returns on those investments. ChatGPT 4.5 was a disappointment. 5 isn't any better. The cost to train these is expansionary, but the value has diminishing returns. That's an economic trap.

Go look at the cost per transistor in the foundries thread and you see an industry where, despite the expansionary cost of foundries, you have a falling cost per transistor, because each new node delivered so much value that you stayed ahead of that cost trend. (That's why I say Moore's law is an economic law: in that case the problem to be solved to keep the engine going is finding the scale to afford capex; the marginal cost is trivial.)

That's the exact same problem AI is facing - their training costs are enormous and somewhat unknown, depending on what happens with the royalty/IP lawsuits - but we're not seeing the returns from those investments that the first few generations of these models promised. They know how to price inference, but they don't know how to also cover training. That's the failure of the business model. And nobody has solved it. Anthropic tried by hiking prices, hoping that companies using their developer tools would pay, and that failed - it didn't deliver enough value, and it was too expensive. These reasoning models increase your token costs, sometimes by two orders of magnitude. And if you are building general-knowledge models, you can't stop training them, because they go stale and become useless.

I'm bullish on two approaches:
1) Apple's, by distributing the inference costs to the customers themselves - the customers own the infrastructure and the cost to operate it - and by narrowing the expectations of the models to something that is manageable to train. They haven't successfully executed on that, but I think they understood what would work as a business model for them.
2) The expert systems - the drug-discovery tools, the materials-characterization tools, etc. The stuff that sits inside a company, is trained on the academic information base as well as internal proprietary data, and seeks to make your PhDs more productive. These are mainly open source now from what I can tell, and they are VERY modest in terms of what they do and seek to do.

The problem with the broader AI approach is that if you think you're going to sell marketing companies on this stuff, the tools are going to spit out such generic output that the value will get bled out of it. And once the value is bled out, who is paying? Most of the big AI players believe (without evidence) that they are chasing AGI, and AGI is an economy-destroying machine. But their investors believe that these tools will lead to payroll-less, super-profitable industry leaders. Nobody is considering that this has massive negative macroeconomic consequences. Sure, it's great in the weighing-machine game of the stock market, and it's catastrophic in the measuring-machine game of the economy. And that assumes they can get there, and the last year has been nothing but evidence that they are mostly played out in that effort. The value/cost exercise should be continuing - as you saw in all of your examples - and we haven't seen any of that in the last 12 months. The last 12 months have been that exercise getting much worse.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.
This is one of the two biggest leaders in the industry, and his three-month-out predictions are complete garbage. That's the same horizon as the standard quarterly guidance every CEO has to provide. These people have no f'ing idea what they're doing, and I hold Anthropic in higher regard than any of them.
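To put rough numbers on the "two orders of magnitude" token-cost point, here's a toy calculation. The price and token counts are entirely hypothetical; only the ratio matters:

```python
# Toy illustration of reasoning-model token costs. All prices and
# token counts are made up, just to show the orders of magnitude.
price_per_1k_output_tokens = 0.01   # assumed $/1K tokens

plain_answer_tokens = 300           # a direct completion
reasoning_tokens = 30_000           # same answer plus a long chain of thought

plain_cost = plain_answer_tokens / 1000 * price_per_1k_output_tokens
reasoning_cost = reasoning_tokens / 1000 * price_per_1k_output_tokens
print(f"plain: ${plain_cost:.4f}, reasoning: ${reasoning_cost:.2f}, "
      f"ratio: {reasoning_cost / plain_cost:.0f}x")  # 100x: two orders of magnitude
```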
 

name99

Senior member
Sep 11, 2010
652
545
136
This is niche enough that I'm pretty sure Apple wouldn't bother.


I already do this in CarPlay with Spotify. Spotify on my iOS 26 iPhone 12 Pro Max plays through my car's CarPlay but my daughter's iPhone 16e controls the actual song playback via Spotify Queue and Jam.

Furthermore, my 2025 Toyota Camry supports 2 connected Bluetooth devices so I can use navigation software from my phone while she plays Bluetooth music from hers.
The core issue here is you're using BT, with BT audio quality, not AirPlay WiFi audio quality.
May not matter to you, but it's a factor.
 

name99

Senior member
Sep 11, 2010
652
545
136
A19
GPU AI Score
View attachment 130098

A19 vs A18 Pro GPU

View attachment 130099
Seems like we're seeing the effect of the 2x FP16 hardware in the GPU (basically achieved by having the FP32 pipe also support FP16; I have the patent but still need to read it carefully).

My guess is that the GPU quantized result is very much a lower bound and may double when all the elements are in place (updated Metal APIs, runtime, and GB6 recompiled to support those). Right now GB6 may be doing the quantization ON GPU using something like INT8 rather than a direct INT4 or whatever it is that Apple promised gets us 3 to 4x.
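For anyone unfamiliar with what's being traded off here, below is a minimal sketch of symmetric weight quantization. It illustrates the general technique (smaller bit widths mean smaller memory footprint but coarser values), not anything Apple or GB6 actually does:

```python
# Sketch of INT8 vs INT4 weight quantization (symmetric, per-tensor).
# Illustrative only; real schemes are per-channel/per-group and fancier.
def quantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1          # 127 for INT8, 7 for INT4
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.7, -1.2, 0.03, 0.9]
for bits in (8, 4):
    q, s = quantize(weights, bits)
    err = max(abs(a - b) for a, b in zip(weights, dequantize(q, s)))
    print(f"INT{bits}: max error {err:.4f}")  # INT4 is coarser but halves memory again
```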
 

Eug

Lifer
Mar 11, 2000
24,114
1,760
126
The core issue here is you're using BT, with BT audio quality, not AirPlay WiFi audio quality.
May not matter to you, but it's a factor.
AFAIK, there are no cars in existence that support AirPlay, and nobody is asking for it either.
 

name99

Senior member
Sep 11, 2010
652
545
136
Buddy, I retired early because I made millions understanding that.

But even growth companies have a business model where you can see 'once we climb the adoption curve, we'll be profitable'. AI has already climbed the adoption curve, though. One of the notable things about it is that they climbed that curve almost immediately.


Yes, it's been less than 3 years, but they have scale. They have 750 million weekly active users. The whole point of these companies has been that they would build value into their cost, and that would bring in more users. You build value through training - the thing they are struggling to justify, by your own admission. And we are seeing strongly diminishing returns on those investments. ChatGPT 4.5 was a disappointment. 5 isn't any better. The cost to train these is expansionary, but the value has diminishing returns. That's an economic trap.

Go look at the cost per transistor in the foundries thread and you see an industry where, despite the expansionary cost of foundries, you have a falling cost per transistor, because each new node delivered so much value that you stayed ahead of that cost trend. (That's why I say Moore's law is an economic law: in that case the problem to be solved to keep the engine going is finding the scale to afford capex; the marginal cost is trivial.)

That's the exact same problem AI is facing - their training costs are enormous and somewhat unknown, depending on what happens with the royalty/IP lawsuits - but we're not seeing the returns from those investments that the first few generations of these models promised. They know how to price inference, but they don't know how to also cover training. That's the failure of the business model. And nobody has solved it. Anthropic tried by hiking prices, hoping that companies using their developer tools would pay, and that failed - it didn't deliver enough value, and it was too expensive. These reasoning models increase your token costs, sometimes by two orders of magnitude. And if you are building general-knowledge models, you can't stop training them, because they go stale and become useless.

I'm bullish on two approaches:
1) Apple's, by distributing the inference costs to the customers themselves - the customers own the infrastructure and the cost to operate it - and by narrowing the expectations of the models to something that is manageable to train. They haven't successfully executed on that, but I think they understood what would work as a business model for them.
2) The expert systems - the drug-discovery tools, the materials-characterization tools, etc. The stuff that sits inside a company, is trained on the academic information base as well as internal proprietary data, and seeks to make your PhDs more productive. These are mainly open source now from what I can tell, and they are VERY modest in terms of what they do and seek to do.

The problem with the broader AI approach is that if you think you're going to sell marketing companies on this stuff, the tools are going to spit out such generic output that the value will get bled out of it. And once the value is bled out, who is paying? Most of the big AI players believe (without evidence) that they are chasing AGI, and AGI is an economy-destroying machine. But their investors believe that these tools will lead to payroll-less, super-profitable industry leaders. Nobody is considering that this has massive negative macroeconomic consequences. Sure, it's great in the weighing-machine game of the stock market, and it's catastrophic in the measuring-machine game of the economy. And that assumes they can get there, and the last year has been nothing but evidence that they are mostly played out in that effort. The value/cost exercise should be continuing - as you saw in all of your examples - and we haven't seen any of that in the last 12 months. The last 12 months have been that exercise getting much worse.


This is one of the two biggest leaders in the industry, and his three-month-out predictions are complete garbage. That's the same horizon as the standard quarterly guidance every CEO has to provide. These people have no f'ing idea what they're doing, and I hold Anthropic in higher regard than any of them.
I agree with the elements that go into your analysis, not necessarily with the conclusion.
In particular I think we've been seeing waves of pushing an idea as far as it will go (perhaps too far) just because each improvement was so amazing.
So we pushed LLMs perhaps too far (too much data stored in the weights, relative to using RAG and reasoning).
Now we've pushed "reasoning" too far relative to rethinking more "structured" ways of getting to that goal rather than just building on what the LLM provides.

So generically I see a future where "we" (ie the leading parts of the AI industry) dial back both these knobs and start exploring other directions. One of these is alternative ways to reason, but another is VIDEO-augmented training (ie [somewhat] "grounded" training).

For my thoughts on this in particular check out the end of
https://www.realworldtech.com/forum/?threadid=225114&curpostid=225133
 

poke01

Diamond Member
Mar 8, 2022
4,196
5,542
106
Seems like we're seeing the effect of the 2x FP16 hardware in the GPU (basically achieved by having the FP32 pipe also support FP16; I have the patent but still need to read it carefully).

My guess is that the GPU quantized result is very much a lower bound and may double when all the elements are in place (updated Metal APIs, runtime, and GB6 recompiled to support those). Right now GB6 may be doing the quantization ON GPU using something like INT8 rather than a direct INT4 or whatever it is that Apple promised gets us 3 to 4x.
What about those new AI cores? Are they helping?
 

Doug S

Diamond Member
Feb 8, 2020
3,570
6,305
136
4000 and 11000! Somebody benching in the freezer?!?

That is higher than M2 for multi-core, and is roughly equivalent to M1 Pro 8-core.

Geekbench has enough run to run variation that if enough people run it enough times, eventually they'll post a score that's near the top of its range for that hardware. When the first Geekbench results are produced for a product after release, we always see the "best score" go up a few percent until it pretty much stabilizes. In a week when millions of them are out there instead of thousands, it might go up a few more points yet.