Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
24,114
1,760
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from the GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from the occasional slight clock speed difference).
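
As a side note, the GPU throughput figures listed above hang together with some back-of-the-envelope math. This is only a sketch: the 8 ALUs per execution unit and the FMA-counts-as-2-FLOPs convention are assumptions on my part, and the clock speed is derived from the published numbers rather than an official figure.

```python
# Back-of-the-envelope check of the M1 GPU figures listed above.
# Assumption (not from the spec list): each of the 128 EUs has 8 FP32 ALUs,
# and one FMA counts as 2 FLOPs per clock.
alus = 128 * 8                              # 1024 FP32 lanes
clock_ghz = 2.6e12 / (alus * 2) / 1e9       # implied clock from 2.6 TFLOPS
print(f"implied GPU clock ~{clock_ghz:.2f} GHz")           # ~1.27 GHz

# With that clock, the fill-rate figures imply roughly:
print(f"texture units ~{82e9 / (clock_ghz * 1e9):.0f}")    # ~65 TMUs
print(f"ROPs          ~{41e9 / (clock_ghz * 1e9):.0f}")    # ~32 ROPs
```

The implied ~1.27 GHz clock and the texture/ROP counts are just what the published rates work out to, not Apple-confirmed figures.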

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC (H.265), and ProRes
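
For what it's worth, the "up to 100 GB/s" figure falls straight out of the memory interface. A minimal sketch, assuming a 128-bit LPDDR5-6400 bus (the bus width and speed grade aren't listed above):

```python
# Where the ~100 GB/s unified-memory figure plausibly comes from,
# assuming a 128-bit LPDDR5-6400 interface (assumption, not stated above).
bus_bits = 128
transfer_rate = 6400e6                         # 6.4 GT/s per pin
bandwidth_gbs = bus_bits / 8 * transfer_rate / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")             # 102.4 GB/s, marketed as "100 GB/s"
```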

M3 Family discussion here:


M4 Family discussion here:

 

poke01

Diamond Member
Mar 8, 2022
4,196
5,542
106
I doubt the CPU will. Too much work for too few pennies.
But ARM should never try to play hardball with the $3T gorilla, which has a history of fighting decade-long battles to save pennies.
Nvidia and Google do the same too and use RISC-V cores as coprocessors in their GPUs and TPUs respectively.

I guess saving costs applies to any business, not just Apple.

Nope, Apple is just using RISC-V coprocessors, and only for the encoders at that.
 

gdansk

Diamond Member
Feb 8, 2011
4,567
7,679
136
Nvidia and Google do the same too and use RISC-V cores as coprocessors in their GPUs and TPUs respectively.

I guess saving costs applies to any business, not just Apple.
I can't speak about Google, but Nvidia used their own non-ARM internal cores (Falcon) before that. It was cost-cutting, but not exactly the same; the switch to RISC-V was mainly for toolchain improvements.
 

poke01

Diamond Member
Mar 8, 2022
4,196
5,542
106
It's a matter of time before more companies switch to RISC-V for embedded. It's more flexible; I don't think it's mainly about cost. These companies already have licenses with ARM, so whatever costs they save are a benefit.
 

gdansk

Diamond Member
Feb 8, 2011
4,567
7,679
136
It's a matter of time before more companies switch to RISC-V for embedded. It's more flexible; I don't think it's mainly about cost. These companies already have licenses with ARM, so whatever costs they save are a benefit.
It's mainly about cost. The per-core fees are a waste of money in embedded roles. RISC-V compilers are good enough and RISC-V cores are good enough. Apple has unilaterally added custom instructions to their ARM cores anyway (and had some retroactively blessed). I don't think it's about flexibility.

It needn't be a matter of time for the CPUs, because they don't want to port software again. The cost of that effort, divided among all the cores they ship, is the highest fee Apple is willing to pay. ARM must keep that in mind when negotiating license agreements with Apple. That's their pound of flesh and not an ounce more.
 

Doug S

Diamond Member
Feb 8, 2020
3,567
6,303
136
It's mainly about cost. The per-core fees are a waste of money in embedded roles. RISC-V compilers are good enough and RISC-V cores are good enough. Apple has unilaterally added custom instructions to their ARM cores anyway (and had some retroactively blessed). I don't think it's about flexibility.

It needn't be a matter of time for the CPUs, because they don't want to port software again. The cost of that effort, divided among all the cores they ship, is the highest fee Apple is willing to pay. ARM must keep that in mind when negotiating license agreements with Apple. That's their pound of flesh and not an ounce more.

Well, ARM did say they were going to be making a lot more revenue from their cores in the future, and Apple's ALA was extended through 2040 or so a few years ago. Maybe using ARM-designed cores is adding up to enough that it's worth it for Apple to design their own M0-class core? No idea how the ALA is structured, but if part of the royalties is based simply on counting "cores", then even a few pennies per core could start to add up if there are dozens of these things in there, regardless of whether Apple designs the cores itself or uses ARM-designed ones.
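
To put rough numbers on that last point (all of these are made-up, illustrative figures, not known royalty rates, core counts, or shipment numbers):

```python
# Purely illustrative numbers - none of these are known figures.
# The point: a per-core royalty on dozens of tiny helper cores,
# multiplied by Apple-scale shipments, adds up quickly.
royalty_per_core = 0.03        # assume 3 cents per core
helper_cores_per_soc = 24      # assume "dozens" of M0-class cores per chip
devices_per_year = 250e6       # rough order of Apple's annual device shipments

annual_cost = royalty_per_core * helper_cores_per_soc * devices_per_year
print(f"~${annual_cost / 1e6:.0f}M per year")   # ~$180M/year at these guesses
```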

Don't tools like Cadence have some simple RISC-V core IP for TSMC processes? Apple might not even need to design them if they can get what they need from a library. Either way, it doesn't bode well for ARM if Apple is taking this step, because if a $3.5 trillion company thinks it's worth it, imagine what a company 1/10000th of their size thinks. If ARM's/SoftBank's greed kills the use of ARM in M0-class embedded roles by overcharging, and drives that market into RISC-V's arms (sorry), they could be setting themselves up to be eaten from below by a cheaper competitor once it has "grown up". Just like mainframes. Just like minis. Just like RISC workstations. Just like x86 (at least at Apple, and to some extent in servers, and completely in mobile).
 

mikegg

Golden Member
Jan 30, 2010
1,975
577
136
I know this isn't what you were talking about as far as OpenAI goes, but something I found quite interesting was the discussion WRT Oracle's deal with OpenAI, under which OpenAI will spend $300 billion on Oracle cloud starting in 2027. OpenAI has yearly REVENUE of $10 billion. It doesn't matter how profitable that revenue is; even if it were pure profit, that's far, far short of what's required just to meet that one spending commitment.

Doing so requires revenue growth of more than an order of magnitude within the next couple of years and/or massive external investment (i.e., substantial dilution for the current shareholders). Hence Oracle declined more than 10% from the massive boost that announcement initially gave it - smart investors are wondering how much of that $300 billion Oracle will ever see.
I've been thinking about this OpenAI and Oracle deal. Initially, I was like you - there's no way. But the more I thought about it, the more plausible it seems.

It's $300b starting in 2027, spanning 5 years. So it could be $20b in year one, for example, and then scale up.

OpenAI's revenue this year is about $13b. They grew 3-4x in revenue this year. Next year, they could make $39 - $52b in revenue if this growth rate continues. In 2027, let's say they double instead. That's $78b - $104b. More than enough to pay $20b to Oracle for the first year and then keep scaling.
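
Putting that growth math in one place (every growth rate, and the $20b year-one figure, are assumptions from the paragraphs above, not reported numbers):

```python
# Rough sketch of the revenue math above; every growth rate is an assumption.
rev_2025 = 13e9                    # ~$13B revenue this year
rev_2026 = rev_2025 * 3            # 3-4x growth -> $39-52B; take the low end
rev_2027 = rev_2026 * 2            # "let's say they double instead"
print(f"2027 revenue ~${rev_2027 / 1e9:.0f}B")              # ~$78B

# If the $300B commitment is back-loaded and starts around $20B/year:
print(f"year-one Oracle spend vs revenue: {20e9 / rev_2027:.0%}")   # ~26%
```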

Another lever they have is an IPO. I'm willing to bet that with all the AI hysteria, OpenAI could IPO today at a $1 trillion valuation and pocket a cool $200b - $300b.

So the $300b deal is not that farfetched. At first, I thought it was unlikely that Oracle would ever see that $300b. Now I think it's more likely than not.

One possible reason Oracle is winning here is that they're the only big tech company not designing their own AI chips. Meta, Microsoft, Amazon, Google, and even Apple have all decided to design their own. Nvidia is not stupid. They might be planning to give Oracle first dibs on the newest Nvidia chips. Therefore, Oracle is becoming THE cloud company for accessing the latest Nvidia chips in bulk. Oracle is the company Nvidia might be partnering with to counter the other big clouds' in-house AI chips.
 

Eug

Lifer
Mar 11, 2000
24,114
1,760
126
Ming-Chi Kuo says that when the MacBook Pro gets OLED, it will get a touch screen too, some time in 2026.


Regarding the iPhone 17 series:

Engadget said the new neural accelerators in the A19 Pro make for a much-improved AI experience. They also say the cooling makes a noticeable difference in day-to-day use.


In general, the iPhone 17 Pro stayed cool — and that’s both during the first few days with the case on and after I removed it altogether. When I played Snake Clash for about 25 minutes, I started noticing some gentle warmth emanating from the camera plateau. I put the device down on a terry cloth blanket and picked up the iPhone 16 Pro to play on instead, and just five minutes later it had gotten as warm as the 17 Pro. Ten minutes later, I had to adjust my fingers so the iPhone 16 Pro didn’t feel like an iron.

That’s not to say the iPhone 17 Pro never got noticeably hot in my testing, by the way. In my experience, generating photos in Image Playground or creating Genmoji typically caused my iPhone 16 Pro to heat up to scary levels. On the iPhone 17 Pro, it took a slightly longer time to get as warm, but it did eventually become so hot I felt the need to warn people if I were to hand the device off. I found the aluminum parts of the handset to be the hottest, which makes sense both scientifically and in the way our skin perceives temperatures.

I do want to commend Apple for the improved performance in Image Playground and Genmoji. It used to take ages for AI-generated pictures or emoji to appear (especially those based on a picture of someone in my photo album), but on the iPhone 17 Pro I was able to get several options in succession before things slowed down. Pictures where I opted to use ChatGPT’s more realistic styles took a lot longer, but by and large I saw a marked improvement in speed here. Those neural accelerators in the A19 Pro’s six-core GPU are certainly pulling their weight.

It might be worth noting that in the 25 minutes of Snake Clash time, the iPhone 17 Pro’s battery level dropped about ten percent. The iPhone 16 Pro went from 90 percent to 79 percent in roughly the same duration, so power efficiency in this specific use case seems fairly similar.
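
Reading those numbers as a rough drain rate (the review only gives approximate durations, so treat this as a sketch rather than a real efficiency measurement):

```python
# Rough drain-rate comparison from the quoted numbers; durations are approximate.
drop_17 = 10 / 25             # iPhone 17 Pro: ~10% over ~25 minutes
drop_16 = (90 - 79) / 25      # iPhone 16 Pro: 90% -> 79% in roughly the same time
print(f"17 Pro: {drop_17:.2f} %/min, 16 Pro: {drop_16:.2f} %/min")  # 0.40 vs 0.44
```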


Apple could also use RISC-V in their radio silicon, like C1, C1X, N1, etc.
Wow, first and only post in over a year! :)
 

regen1

Member
Aug 28, 2025
86
139
61
Geekerwan review
[Chart: P-cores]

[Chart: E-cores]

Lots of other benchmarks
 

Eug

Lifer
Mar 11, 2000
24,114
1,760
126

DZero

Golden Member
Jun 20, 2024
1,623
629
96
No videos of iOS games on the iPhone 17 series yet, so we'll have to wait and see how well optimized they've become. It's supposed to run native 480p with ray tracing without issues.
 

Eug

Lifer
Mar 11, 2000
24,114
1,760
126
Apple has done a great job with its new Apple10 GPU architecture. M5 chips will further narrow the gap with Nvidia.
What's with the monster A19 Pro FP16 score?

[Attached chart: A19 Pro FP16 GPU scores]

And what does this demo test? It's handily beating M3 and M4.

[Attached chart: GPU demo test results]

EDIT:

Auto-translate says: "GPU Light Tracking Test (Magic Demo)", i.e. a GPU ray-tracing demo.
 