Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
24,048
1,679
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).
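The GPU figures above hang together arithmetically. As a sanity check, here's a quick back-of-the-envelope in Python; the 8-ALUs-per-execution-unit and ~1278 MHz clock are assumptions (commonly reported for the M1, not stated in the spec list above):

```python
# Cross-check of the M1 GPU's quoted 2.6 TFLOPS.
# Assumed (not from the list above): 8 ALUs per execution unit,
# ~1278 MHz GPU clock - both widely reported figures for the M1.
EUS = 128
ALUS_PER_EU = 8
CLOCK_HZ = 1.278e9
FLOPS_PER_ALU_PER_CYCLE = 2  # one fused multiply-add = 2 FLOPs

tflops = EUS * ALUS_PER_EU * FLOPS_PER_ALU_PER_CYCLE * CLOCK_HZ / 1e12
print(f"{tflops:.2f} TFLOPS")  # ~2.62, matching the quoted 2.6
```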

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, H.265 (HEVC), and ProRes
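The "up to 100 GB/s" memory figure above is consistent with a 128-bit LPDDR5 interface at 6400 MT/s; those interface parameters are a commonly reported assumption, not stated in the list:

```python
# Sanity check of the M2's quoted ~100 GB/s unified memory bandwidth.
# Assumed (commonly reported, not in the spec list above):
# 128-bit LPDDR5 interface at 6400 MT/s.
bus_bits = 128
transfers_per_sec = 6.4e9  # 6400 MT/s

gb_s = (bus_bits / 8) * transfers_per_sec / 1e9  # bytes/transfer x rate
print(f"{gb_s:.1f} GB/s")  # 102.4, i.e. the "100 GB/s" above
```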

M3 Family discussion here:


M4 Family discussion here:

 

Jan Olšan

Senior member
Jan 12, 2017
547
1,084
136
It's trickier than that. The reason Cyberpunk 2077 performed so poorly on the PS4 is that the game was designed to stream assets, and the relatively slow spinning drive in a stock PS4 couldn't do what even a crappy SSD, let alone the PS5's stock SSD, could do. On the PS4 the game was I/O constrained, not GPU or CPU constrained. Cyberpunk on a PS4 often didn't even turn the fan on because the game was so frequently just waiting on HDD seeks. Shoving 168 GB of assets across a storage bus, keeping the right subset in CPU RAM, pushing it over a PCIe bus, and prioritizing what stays on the GPU is no small feat, as any one of those stages going into the weeds tanks your performance.
I don't think I/O is a major issue that often. Or, perhaps better said, this is more often a different core issue: insufficient VRAM capacity on cheap graphics cards (8 GB, and it's starting to hit 12 GB). The topic has been discussed a lot recently.
And ARM Apple platforms are not going to use HDDs or low-sequential speed SSDs.
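The magnitude of the gap being argued here is easy to put in rough numbers. This is illustrative only; the throughput figures below are ballpark values for each class of drive, not measurements of any specific console or Mac:

```python
# Why asset streaming that is fine on an SSD can starve on a
# spinning drive. Throughputs are rough ballpark figures.
working_set_gb = 168  # total assets cited above; only a slice streams at once

drives_mb_s = {
    "PS4-era HDD (random-ish reads)": 50,
    "cheap SATA SSD": 400,
    "PS5-class NVMe SSD": 5000,
}

for name, mb_s in drives_mb_s.items():
    minutes = working_set_gb * 1024 / mb_s / 60
    print(f"{name}: {minutes:.1f} min to read the full asset set")
```

Roughly an hour on the HDD versus well under a minute on a PS5-class drive, which is why the same game can be I/O bound on one machine and GPU bound on another.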
Apple is presenting the industry with the Cell processor all over again. In theory it might be better but if you have to completely reoptimize your game to get to better, the economics may not exist to allow that to happen.
Hmm, I really don't think it is anything that drastic, by far. It's retargeting to a pretty standard GPU acceleration model (whereas Cell... lol). What they have to handle is really having a well-performing driver, a good compiler (I forgot to specify it, but above I meant the shader compiler that is part of the GPU driver, not anything related to usual compilers), and generally a driver that can properly utilise the GPU's units. That's really business as usual for GPU vendors, nothing revolutionary or paradigm-shifty. But at the same time it's the hard kind of business as usual only a few can do, as mentioned.
Sure, suddenly dealing with a new lineage of GPU architecture and software stack is going to trip you up in unexpected places. Different approaches (number of threads, long-lived vs. short-lived threads) may be needed to get good occupancy. But it is not going to be THAT different from when Intel became a game graphics target.
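The occupancy point above can be made concrete with a toy model: how many threads a GPU core can keep resident is typically limited by per-thread resources like registers. All the limits below are made-up round numbers for illustration, not any real architecture's:

```python
# Toy occupancy model: why thread count and shader "weight" matter
# when moving to a new GPU architecture. Limits are invented.
def occupancy(regs_per_thread, regs_per_core=4096, max_threads_per_core=1024):
    """Fraction of the core's thread slots actually usable."""
    fit_by_regs = regs_per_core // regs_per_thread
    resident = min(fit_by_regs, max_threads_per_core)
    return resident / max_threads_per_core

print(occupancy(4))   # 1.0   -- lean shaders keep the core full
print(occupancy(32))  # 0.125 -- register-hungry shaders starve it
```

The same shader can land on very different occupancy numbers on two architectures with different register files, which is the kind of unexpected tripping point being described.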
 

smalM

Member
Sep 9, 2019
82
92
91
No, just knowing what words mean... Hint - what's the relationship between power and energy...

What the original poster said is essentially correct. I explained this repeatedly when the M2 came out; I'm not going to waste my time doing so again.
Yeah, we know you are only playing "Joules consumed and frames processed" while all the others have fun playing a game.

BTW it would be much nicer to have a conversation with you if you didn't consider yourself to be the only intelligent person around.
 

name99

Senior member
Sep 11, 2010
614
511
136
Yeah, we know you are only playing "Joules consumed and frames processed" while all the others have fun playing a game.

BTW it would be much nicer to have a conversation with you if you didn't consider yourself to be the only intelligent person around.
Somewhat fair.
But it's also hard to have a conversation if people refuse to use words correctly and assume that the person on the other side can psychically understand exactly what they mean in spite of the words used. Energy and power have specific meanings in this context, and you cannot explain the issue if you don't use them appropriately.
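For anyone following along, the distinction being argued is simple: power (watts) is a rate, while energy (joules) = power x time is what the battery actually pays. A quick worked example with invented numbers (not benchmarks of any real chip):

```python
# Power vs. energy in concrete terms. Figures are invented
# for illustration, not measurements of any real chip.
chips = {
    "Chip A": {"watts": 30, "fps": 60},
    "Chip B": {"watts": 15, "fps": 40},
}

for name, c in chips.items():
    # W / (frames/s) = J/frame: the energy cost of each frame
    joules_per_frame = c["watts"] / c["fps"]
    print(f"{name}: {joules_per_frame:.3f} J per frame")
```

Here Chip B draws half the power and delivers fewer fps, yet costs less energy per frame (0.375 J vs. 0.5 J), so it is the more efficient chip; neither the wattage nor the frame rate alone tells you that.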
 

Doug S

Diamond Member
Feb 8, 2020
3,336
5,832
136

That reminds me of something I speculated on / wondered about.

The more products Apple has that don't follow any set release schedule but share the same P cores, E cores, GPU cores, and/or NPU cores, the less incentive Apple has to sync a "new core" with the iPhone's schedule. So I wonder whether the teams are still scheduling their work around the iPhone, or if some of them might be working off other schedules - e.g. maybe the GPU core is seen as more important to the Mac than the iPhone, so they target their work towards the planned release schedule of the Mac (if indeed Apple even has any product announcement/release schedule for it more than a year in advance).

Or perhaps they no longer work on any schedule tied to a specific product. So maybe for one version of the P core they are making more modest changes, and it is planned to take 9 months after the previous one for that core to be "ready" (i.e. available for designers working on SoCs for various products to use), but for the one after that they are making bigger changes and plan on 16 months for the core to be ready.

The virtue of that would be that if the schedule slips a bit it is less of an issue, and if you beat your schedule some benefit can be derived from it. In the past, when core development was tied to the iPhone schedule, there was hell to pay if you were late, and being early was pointless.
 

name99

Senior member
Sep 11, 2010
614
511
136
Tim Cook once told TSMC that Intel didn't know how to be a foundry - essentially that Intel didn't know how to meet the customer's needs and not try to control everything. I would presume that has changed, but who knows.
I wouldn't put it quite like that.
But what I have seen repeatedly through multiple iterations of this sorry mess is that they simply cannot or will not believe that other people have different priorities.

nV cares PRIMARILY about density.
Apple cares PRIMARILY about energy.
Intel cares PRIMARILY about GHz -- and simply cannot fathom that Apple and nV (who act as proxies for mobile as a whole and AI as a whole) are not impressed with higher GHz at the cost of what actually matters to them.

Intel screwed this up the first time round, then learned absolutely nothing from the experience going in the second time round.