
Discussion: Apple Silicon SoC thread


Eug

Lifer
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).
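
A quick consistency check on those GPU numbers: a minimal sketch, assuming 8 FP32 ALUs per execution unit, 64 texture units, and 32 ROPs (Apple publishes none of these, and the ~1.28 GHz clock is inferred from the 2.6 TFLOPS figure, not announced).

```swift
import Foundation

// Sanity check of the M1 GPU headline numbers. Assumptions (not
// Apple-published): 8 FP32 ALUs per execution unit, an FMA counted
// as 2 FLOPs, and a clock inferred from the 2.6 TFLOPS figure.
let executionUnits = 128.0           // 8 cores x 16 EUs each
let alus = executionUnits * 8.0      // 1024 ALUs (assumed 8 per EU)
let clockGHz = 1.278                 // inferred, not published

let tflops  = alus * 2.0 * clockGHz / 1000.0   // FMA = 2 FLOPs/cycle
let gtexels = 64.0 * clockGHz                  // assumes 64 texture units
let gpixels = 32.0 * clockGHz                  // assumes 32 ROPs

print(String(format: "%.2f TFLOPS, %.1f GT/s, %.1f GP/s", tflops, gtexels, gpixels))
// "2.62 TFLOPS, 81.8 GT/s, 40.9 GP/s" -- matching the 2.6 / 82 / 41 above
```

All three headline numbers fall out of a single clock, so the spec sheet is at least internally consistent.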

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock-speed differences).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, H.265 (HEVC), and ProRes
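
The 3.6 TFLOPS figure above fits the same arithmetic as the M1 check earlier; per-core ALU count and the ~1.4 GHz clock are again assumptions, not Apple-published numbers.

```swift
// M2 GPU check, same assumptions as the M1 block above
// (128 FP32 ALUs per core, FMA = 2 FLOPs; the clock is inferred).
let m2Alus = 10.0 * 128.0             // 10 cores x 128 ALUs
let m2ClockGHz = 1.4                  // inferred from the 3.6 TFLOPS figure
print(m2Alus * 2.0 * m2ClockGHz / 1000.0)   // 3.584, i.e. "3.6 TFLOPS"
```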

M3 Family discussion here:


M4 Family discussion here:


M5 Family discussion here:

 
you missed the memo: M4/M5 Macs can run almost all Windows games, and fast... via VM

At the moment the only things it can't do are online gaming (anti-cheat doesn't like VMs) and DX12
 
you missed the memo: M4/M5 Macs can run almost all Windows games, and fast... via VM

At the moment the only things it can't do are online gaming (anti-cheat doesn't like VMs) and DX12

That wins the "I will buy a Mac if I don't have to fully give up my PC gaming habit" market, but people looking for a laptop for gaming still aren't going to look at a Mac. Same reason they aren't going to look at Qualcomm or Nvidia ARM Windows laptops. If it ain't x86 you gotta deal with too much hassle with compatibility and it'll restrict what games you can play.
 
Three different validations just today about this.
 

Not surprising with the number of units sold every year since 2020.
 
[M6] wins the "I will buy a Mac if I don't have to fully give up my PC gaming habit" market

100%


but people looking for a laptop for gaming still aren't going to look at a Mac

Wrong, that will change. M6 will be very strong in mid-to-high-range performance/price. Prophecy #172

If it ain't x86 you gotta deal with too much hassle with compatibility and it'll restrict what games you can play.


No, a Windows VM runs everything except DX12. Similar emulation software is in development too
 
Any information about Apple A20/M6?

I have been reading some speculation about core count increases. 2P+6E for A20 Pro, and 6P+6E for M6. Is it possible?
 
Any information about Apple A20/M6?

I have been reading some speculation about core count increases. 2P+6E for A20 Pro, and 6P+6E for M6. Is it possible?
N2 will easily give the power and transistor budget for that.
The question is not whether it is possible but whether Apple want to do it.
 
Any information about Apple A20/M6?

I have been reading some speculation about core count increases. 2P+6E for A20 Pro, and 6P+6E for M6. Is it possible?
There are rumours that the A20 will use six-channel LPDDR5X memory. WMCM (Wafer-Level Multi-Chip Module) packaging will also be used.
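
For scale, a back-of-envelope bandwidth estimate. This assumes Apple's usual 16-bit LPDDR channels and an LPDDR5X-9600 data rate, neither of which is confirmed for the A20; the M2's published 100 GB/s serves as a calibration point.

```swift
// Back-of-envelope memory bandwidth: channels x channel width x data rate.
// Assumes 16-bit LPDDR channels (Apple's usual arrangement) and the
// rumoured LPDDR5X-9600 data rate; neither is confirmed for the A20.
func bandwidthGBps(channels: Int, bitsPerChannel: Int, megaTransfers: Double) -> Double {
    Double(channels * bitsPerChannel) / 8.0 * megaTransfers / 1000.0
}

// Calibration: M2's published figure (8 x 16-bit LPDDR5-6400 = 128-bit bus)
print(bandwidthGBps(channels: 8, bitsPerChannel: 16, megaTransfers: 6400))  // 102.4, i.e. "100 GB/s"

// Rumoured A20: six 16-bit channels of LPDDR5X-9600
print(bandwidthGBps(channels: 6, bitsPerChannel: 16, megaTransfers: 9600))  // 115.2 GB/s
```

For comparison, the M1's 128-bit LPDDR4X-4266 works out to about 68 GB/s with the same formula.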
 
N2 will easily give the power and transistor budget for that.
The question is not whether it is possible but whether Apple want to do it.
Which in turn depends on where they see compute most required.

IF we believe that, over the next few years, the most important change in computing is going to be AI related, then it makes most sense to spend those transistors on AI rather than CPU computing. The claim above regarding wider memory (certainly giving more bandwidth, maybe also more capacity) suggests that [obtaining more DRAM might be tough, but providing higher bandwidth might be feasible].

This in turn suggests that more appropriate targets for new transistors might be either the GPU or the ANE. This could be in less obvious ways: for example, rather than just more GPU cores, they could enlarge the "neural engine" of each GPU core to handle more compute. Or, to deal with the size of NN weights, they could provide the GPU with the same sort of decompression HW that is already present on the ANE.

More CPU cores are an obvious path forward, but may not be the best fit for the immediate future of these devices?
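
To make the weight-decompression point concrete, a toy sketch: the function and the 4-bit affine scheme here are illustrative only, not Apple's actual format.

```swift
// Why on-the-fly weight decompression helps: store NN weights as packed
// 4-bit integers and expand them in registers, so DRAM traffic is 1/8th
// of what Float32 weights would need. Scheme is illustrative, not Apple's.
func dotProduct4bit(packed: [UInt8], scale: Float, zeroPoint: Float,
                    activations: [Float]) -> Float {
    var acc: Float = 0
    for (i, byte) in packed.enumerated() {
        // Two 4-bit weights per byte; dequantize without touching DRAM.
        let lo = Float(byte & 0x0F) * scale + zeroPoint
        let hi = Float(byte >> 4) * scale + zeroPoint
        acc += lo * activations[2 * i] + hi * activations[2 * i + 1]
    }
    return acc
}

let w: [UInt8] = [0x21, 0x43]   // four 4-bit weights: 1, 2, 3, 4
let y = dotProduct4bit(packed: w, scale: 1.0, zeroPoint: 0.0,
                       activations: [1, 1, 1, 1])   // 1+2+3+4 = 10
```

Dedicated decompression hardware does this expansion inline, so the GPU would get the bandwidth savings without spending shader cycles on unpacking.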
 
Which in turn depends on where they see compute most required.

IF we believe that, over the next few years, the most important change in computing is going to be AI related, then it makes most sense to spend those transistors on AI rather than CPU computing. The claim above regarding wider memory (certainly giving more bandwidth, maybe also more capacity) suggests that [obtaining more DRAM might be tough, but providing higher bandwidth might be feasible].

This in turn suggests that more appropriate targets for new transistors might be either the GPU or the ANE. This could be in less obvious ways: for example, rather than just more GPU cores, they could enlarge the "neural engine" of each GPU core to handle more compute. Or, to deal with the size of NN weights, they could provide the GPU with the same sort of decompression HW that is already present on the ANE.

More CPU cores are an obvious path forward, but may not be the best fit for the immediate future of these devices?
The house of cards will fall down. In my opinion, and feel free to call me naive, right now ChatGPT and Gemini can do everything a non-power user needs.
 
Until now, yes, but the M5 Pro / M6 base etc. are going to be very good gaming machines at their price point.

They will be very strong contenders in the mid-range gaming laptop market. Unless you're going for a real high-end 2.5 kg Windows laptop, a MacBook will be a really good choice for gaming as an ultra-lightweight, with Windows emulation becoming increasingly efficient at supporting all games and running them faster.

That wins the "I will buy a Mac if I don't have to fully give up my PC gaming habit" market, but people looking for a laptop for gaming still aren't going to look at a Mac. Same reason they aren't going to look at Qualcomm or Nvidia ARM Windows laptops. If it ain't x86 you gotta deal with too much hassle with compatibility and it'll restrict what games you can play.
How does gaming in Windows virtualised on Apple processors (and iGPUs) work in practice? Usually alternative GPU providers face huge issues with game compatibility; even Intel has a rough time, despite their graphics having been in the ecosystem for decades, so it wasn't completely new to game devs. I'm highly skeptical you can just install EGS and Steam and run games like it was a Windows PC with Radeon/GeForce graphics.

What sort of driver does the game running in the Windows VM actually communicate with? Does Apple provide a Windows GPU driver with 3D API support? Is there some shim that tries to translate Vulkan and DX11/12 to Metal and tunnels the calls to the macOS host for execution? (IIRC that's how DX12 works in WSL)
 
How does gaming in Windows virtualised on Apple processors (and iGPUs) work in practice? [...]

What sort of driver does the game running in the Windows VM actually communicate with? Does Apple provide a Windows GPU driver with 3D API support? Is there some shim that tries to translate Vulkan and DX11/12 to Metal and tunnels the calls to the macOS host for execution? (IIRC that's how DX12 works in WSL)

No idea, but it runs well and fast enough; find M5 tests on YouTube. It runs almost everything except DX12 and anti-cheat.
 
How does gaming in Windows virtualised on Apple processors (and iGPUs) work in practice? Usually alternative GPU providers face huge issues with game compatibility; even Intel has a rough time, despite their graphics having been in the ecosystem for decades, so it wasn't completely new to game devs. I'm highly skeptical you can just install EGS and Steam and run games like it was a Windows PC with Radeon/GeForce graphics.

What sort of driver does the game running in the Windows VM actually communicate with? Does Apple provide a Windows GPU driver with 3D API support? Is there some shim that tries to translate Vulkan and DX11/12 to Metal and tunnels the calls to the macOS host for execution? (IIRC that's how DX12 works in WSL)
It works through CrossOver, which uses the Game Porting Toolkit.
 
The house of cards will fall down. In my opinion, and feel free to call me naive, right now ChatGPT and Gemini can do everything a non-power user needs.
You see AI as nothing but chatbots.
Apple and others see AI as a general functionality, like "graphics", that can be used to create a new UI, in the same way that "graphics" enabled a new UI.

And while there will doubtless be those (cf NeWS, X-Windows) who once again try for a networked version of this UI, it's more likely that it will be the local version that wins out.
 
You see AI as nothing but chatbots.
Apple and others see AI as a general functionality, like "graphics", that can be used to create a new UI, in the same way that "graphics" enabled a new UI.

And while there will doubtless be those (cf NeWS, X-Windows) who once again try for a networked version of this UI, it's more likely that it will be the local version that wins out.
I mean, what else do you want to do with local AI other than things like better focus algorithms for the camera, or tagging every image the user takes so it is easily searchable?

Siri can already do everything most people want, and it's a dumb AI; the main selling point of these networked AIs is that you can talk to them like a human. And unless Apple invents the world's smallest TTS/STT reversible AI, the hardware limitations will not be on the SoC itself.
 
Apple is benefiting from switching to their own modem and Wi-Fi/BT chips at just the right time. They can absorb BOM increases in other places since they have the savings from that switch, plus they're making only small, incremental changes to other expensive parts of the phone like the display and cameras, so the cost on those will likely be flat to declining.

It would be interesting if Apple detailed more about how they manage their long-term contracts in their upcoming earnings call. I'm sure they'll be asked, but who knows what they'll be willing to say. I find it hard to believe they've been negotiating all new contracts for 100% of the supply every six months, as he seems to be claiming. I have to think it would be more like 25% of their overall supply coming up for negotiation every six months on new two-year contracts, so that they slowly roll over contracts rather than having everything end and restart every six months; that makes no sense. I could easily believe that memory OEMs don't want to ink two-year deals in the current climate and that Apple would have to agree to much shorter terms (or much higher prices). If they're doing it the way I suggest, they won't see the full impact of memory price increases until the middle of 2027.

They've always talked about "long term" supply agreements; six months is not long term to me, so I'd question their wording if that were really the case.
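
The staggered-rollover idea is simple to model. A toy sketch, with purely hypothetical prices, assuming four 25% cohorts on two-year terms with one cohort renewing every six months:

```swift
// Toy model of staggered supply contracts: four cohorts of 25% each on
// two-year terms, one cohort renewing every six months. Spot price jumps
// 50% at period 0; all numbers are hypothetical.
let oldPrice = 1.0
let newPrice = 1.5
var cohortPrices = [Double](repeating: oldPrice, count: 4)

for period in 0..<4 {                    // each period = six months
    cohortPrices[period % 4] = newPrice  // one cohort renews per period
    let blended = cohortPrices.reduce(0, +) / 4.0
    print("after \((period + 1) * 6) months: blended price \(blended)")
}
// after 6 months: 1.125 ... after 24 months: 1.5 -- the full increase
// only lands roughly two years after spot prices move.
```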
 

I looked at this and it's probably bogus. I mean, the Mercury Research numbers are probably as valid as you can get, but they don't show Apple market share; it's *Arm* in total, so including Chromebooks and Qualcomm-based Windows notebooks. (Also, the value for laptops is likely 17% Arm and 18% AMD, not 20% as has appeared in some articles that didn't even look at the graphs properly.)

Hint: look at the notebook chart, which shows non-zero Arm market share prior to 2020.

Not surprising with the number of units sold every year since 2020.
The 17% market share shown in the charts has allegedly been a thing in Q1 and Q3 2025 (Q2 seems to be around 16%), so you can actually look at IDC, Canalys and Gartner statistics for Apple's market share in computers in those quarters.

Edit: Apple was at just 9% market share in Q3 2025 according to IDC (the first one I looked up; didn't check the other two).
 