Discussion Apple Silicon SoC thread

Page 160

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s
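
(For anyone wondering how the 2.6 Teraflops figure relates to the 128 execution units, here's a rough sanity check. The 8 FP32 ALUs per EU and the ~1.28 GHz GPU clock are my assumptions based on common reporting, not Apple-published specs.)

Code:
# Rough check of the M1 GPU's quoted 2.6 TFLOPS (FP32).
# ALUs per EU and the GPU clock are assumptions, not official Apple figures.
eus = 128                    # execution units across the 8 GPU cores
alus_per_eu = 8              # assumed FP32 lanes per execution unit
clock_hz = 1.278e9           # assumed GPU clock (~1.28 GHz)
flops_per_cycle = 2          # a fused multiply-add counts as 2 FLOPs

tflops = eus * alus_per_eu * flops_per_cycle * clock_hz / 1e12
print(f"{tflops:.2f} TFLOPS")  # ~2.62, in line with the quoted 2.6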

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from the occasional slight clock speed difference).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC, and ProRes

M3 Family discussion here:

 
Last edited:

poke01

Senior member
Mar 8, 2022
725
697
106
The whole reason Apple was “competitive” in the first place was they were on a far superior node.
I don't buy this argument. So is AMD ahead of Intel just because they're on a superior node? That almost never gets mentioned when comparing Intel and AMD. Most PC fans dunk on Intel and don't even mention that AMD is on TSMC 5 nm.

But with Apple it's "oh, but Apple is on a better node". Guess what: your average consumer doesn't even know what nodes are. Apple is selling boatloads of M1 and M2 laptops, and considering the starting price is $899 with edu pricing, they make a lot of money.
 

Mopetar

Diamond Member
Jan 31, 2011
7,835
5,981
136
It's frankly incredible how powerful Apple's P cores are. 12900HK scores ~1650 while Apple M2 does ~1900, all the while consuming less power and using lower frequency.

I think it's because the core is freakishly wide (8-wide, I believe) and has much larger caches. It can just get more done in a clock cycle, so the lower speed doesn't hurt it nearly as much.
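
A quick back-of-the-envelope illustration using the scores above. The clock speeds are my rough assumptions (not measured values); the point is just that a wide core can win on work per clock despite a much lower frequency:

Code:
# Per-clock comparison from the single-core scores quoted above.
# Clock figures are rough assumptions for illustration only.
score_12900hk, clock_12900hk_ghz = 1650, 5.0  # assumed P-core boost clock
score_m2, clock_m2_ghz = 1900, 3.5            # assumed M2 P-core clock

per_clock_intel = score_12900hk / clock_12900hk_ghz  # ~330 points per GHz
per_clock_apple = score_m2 / clock_m2_ghz            # ~543 points per GHz
print(f"~{per_clock_apple / per_clock_intel:.1f}x more work per clock")  # ~1.6x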

Intel or AMD could design a chip with similar characteristics, but it's hard to build a single core that can function equally well in both a high power desktop situation and an ultra low power mobile scenario.

I don't buy this argument. So is AMD ahead of Intel just because they're on a superior node? That almost never gets mentioned when comparing Intel and AMD. Most PC fans dunk on Intel and don't even mention that AMD is on TSMC 5 nm.

It was discussed frequently when Intel had a lead over AMD, before GlobalFoundries was spun off. People wondered at various points how much closer AMD's chips would be if they had Intel's node. That kind of died off when Bulldozer came out, because even a better process wasn't going to save it from Core 2 and subsequent CPUs.

The node names the companies use are more or less marketing terms, and if you look at the actual node characteristics, TSMC isn't so much better than Intel (or vice versa) that it comes close to the node advantage Intel had for much of its history.
 
  • Like
Reactions: Tlh97 and scineram

Doug S

Platinum Member
Feb 8, 2020
2,254
3,485
136
I don't buy this argument. So is AMD ahead of Intel just because they're on a superior node? That almost never gets mentioned when comparing Intel and AMD. Most PC fans dunk on Intel and don't even mention that AMD is on TSMC 5 nm.

But with Apple it's "oh, but Apple is on a better node". Guess what: your average consumer doesn't even know what nodes are. Apple is selling boatloads of M1 and M2 laptops, and considering the starting price is $899 with edu pricing, they make a lot of money.


Yep, consumers buy what they buy. If Cyrix rose from the dead and used time-travel fab technology from 2035 to introduce CPUs that could run rings around anything Apple, AMD, or Intel will offer for the rest of this decade in power and performance, no one is going to listen to cries of "but it's not a fair comparison, they're using technology from the future". It would still be the smartest purchase a consumer could make.
 

poke01

Senior member
Mar 8, 2022
725
697
106
The node names the companies use are more or less marketing terms, and if you look at the actual node characteristics, TSMC isn't so much better than Intel (or vice versa) that it comes close to the node advantage Intel had for much of its history.
but look at the battery life figures that AMD and Intel have. AMD always has better battery life than Intel. Surely, the node plays a role.
 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
but look at the battery life figures that AMD and Intel have. AMD always has better battery life than Intel. Surely, the node plays a role.
Due to pursuing performance first, Intel's mobile offerings have become worse efficiency-wise over the last couple of generations. While the node does play a role, in Intel's case the lack of focus on efficiency is also crucial.
 
Jul 27, 2020
16,165
10,240
106
While the node does play a role, in Intel's case the lack of focus on efficiency is also crucial.
Very true. In multiple reviews where Intel and AMD laptops were compared head to head, reviewers mentioned how the AMD chips sipped power and rarely made the fan go full speed, whereas on the Intel side, really loud fans for prolonged periods are very common. Intel does win some benchmarks that matter to the average user, but it comes at the cost of reduced battery life and more heat/noise output.
 
  • Like
Reactions: Tlh97 and scineram

poke01

Senior member
Mar 8, 2022
725
697
106
Due to pursuing performance first, Intel's mobile offerings have become worse efficiency-wise over the last couple of generations. While the node does play a role, in Intel's case the lack of focus on efficiency is also crucial.
Yes, but even when they focus on efficiency for some mobile/laptop chips, it's not as good as AMD and Apple.
 

mikegg

Golden Member
Jan 30, 2010
1,755
411
136
Apple took 20 months to update M1 to M2, which is actually slower than the old X series average.

Furthermore, if M3 arrives in June 2023, I’ll be quite surprised. I’m guessing somewhere between fall 2023 and spring 2024.
We don't actually know when the M2 was ready.

There are plenty of reputable rumors and reports that the M2 was ready well before the new MacBook Air design was. In fact, there were reports that Apple wanted to launch the M2 in the 13" MBP first rather than wait for the new Air design.

Again, we don't need to keep repeating ourselves here. COVID work from home, supply chain issues, and brand-new Mac designs coupled with new SoCs mean that we don't know the actual update cadence Apple wants.

Logic dictates that Apple should update the M chips once a year due to economies of scale, the volume of products that depend on new chips, and the fact that Apple built the M2 on the A15.
 

Ajay

Lifer
Jan 8, 2001
15,431
7,849
136
Apple took 20 months to update M1 to M2, which is actually slower than the old X series average.

Furthermore, if M3 arrives in June 2023, I’ll be quite surprised. I’m guessing somewhere between fall 2023 and spring 2024.
It's very possible if Apple has gone with the M3 on TSMC's N3. Highly improbable if the M3 is going to be on TSMC N3E (and then only if Apple is using risk-production wafers for something like the Mac Pro).
I'm sure Apple's SoC teams have completely recovered by now from the Nuvia exodus and related departures.
 

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
We don't actually know when the M2 was ready.

There are plenty of reputable rumors and reports that the M2 was ready well before the new MacBook Air design was. In fact, there were reports that Apple wanted to launch the M2 in the 13" MBP first rather than wait for the new Air design.

Again, we don't need to keep repeating ourselves here. COVID work from home, supply chain issues, and brand-new Mac designs coupled with new SoCs mean that we don't know the actual update cadence Apple wants.

Logic dictates that Apple should update the M chips once a year due to economies of scale, the volume of products that depend on new chips, and the fact that Apple built the M2 on the A15.
I guess I haven't seen the same rumours.

In any case, the thing is that Apple doesn't sell bare chips; it sells complete machines. Given that the M2 Pro, M2 Max, and even the M2 Mac mini aren't out yet and likely won't be until spring, I think it's unlikely the M3 is coming by June unless you believe Apple is going to skip some of these models.

I suspect what may happen is the MacBook Pro M2 Pro/Max will be released in the spring, likely with an M2 Mac mini around the same time. Then the M2 Max/Ultra Mac Studio will be released months later. Then after that, M3 will come out.

Optimistically, that means fall 2023 for M3, but it could be later.

---

On another note, after returning my scam 2017-in-a-2022-box Apple TV 4K, I ordered another open box, and this time got a pristine brand new looking 2022 Apple TV 4K 128 GB with 4 GB RAM. With its A15 SoC, it is clearly faster for tvOS navigation than my 2017 Apple TV 4K 32 GB with 3 GB RAM and A10X. Also, it's faster for loading and starting videos. However, it wasn't as if the 2017 was actually slow. It's just that the 2022 is faster. We'll see what happens when I start with Apple Arcade though.

As for the Mac mini M1 I got, my other complaint is its pickiness with monitor support. A lot of stuff that works on Intel Macs doesn't work on M1. If I use USB-C to DisplayPort dongles (and I've tried several), the machine won't wake the monitor from sleep. Granted, this is with an old dual-link DVI Apple Cinema Display with a mini-DP to DL-DVI Apple OEM adapter, but others report the same issue with modern DisplayPort monitors. If I use third party USB-C to DL-DVI dongles, it will wake up the monitor each time, albeit with a 4-8 second delay, but more importantly I lose HDCP support. This stuff works fine on my 2017 Intel MacBook with the same monitor and dongles (and with HDCP support), but not on my 2020 M1 Mac mini.

Because of this, and because my old monitor now has some burn-in, I've ordered a new modern USB-C 1440p 32" monitor. We'll see how that goes.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
7,835
5,981
136
but look at the battery life figures that AMD and Intel have. AMD always has better battery life than Intel. Surely, the node plays a role.

Some of that comes down to the actual core design. Even if the nodes have similar density, AMD just has an overall smaller core that's going to take less power to drive. Even the OG Zen was beating Intel chips with a higher TDP, and that's when AMD was still using Global Foundries and an inferior process node.
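
The first-order reason a smaller, lower-clocked core wins on power is the dynamic power relation P ≈ α·C·V²·f: a bigger core switches more capacitance, and chasing higher clocks usually requires higher voltage, so the penalties compound. A minimal sketch with purely illustrative numbers (not measurements of any real core):

Code:
# Dynamic power scales roughly as activity * capacitance * voltage^2 * frequency.
# All numbers below are illustrative assumptions, not real core measurements.
def dynamic_power_w(cap_nf, volts, freq_ghz, activity=0.5):
    return activity * (cap_nf * 1e-9) * volts**2 * (freq_ghz * 1e9)

small_core = dynamic_power_w(cap_nf=2.0, volts=0.9, freq_ghz=4.0)  # ~3.2 W
big_core = dynamic_power_w(cap_nf=3.0, volts=1.2, freq_ghz=5.0)    # ~10.8 W
print(f"{big_core / small_core:.1f}x the power for 25% more clock")  # ~3.3x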
 

mikegg

Golden Member
Jan 30, 2010
1,755
411
136
I guess I haven't seen the same rumours.

In his latest "Power On" newsletter, Gurman said that Apple originally planned to launch its new ‌MacBook Air‌ with "an all-new design, MagSafe, the M2 chip, and more" at the end of 2021 or in early 2022, but this timeframe has now seemingly slipped to the second half of 2022.


Bloomberg originally reported that the new ‌MacBook Air‌ could come as soon as late 2021, but given that Apple's last event of the year has likely come and gone, a launch next year is much more probable.

Plenty of rumors pointing to M2 in late 2021 or early 2022. Rumors were that the M2 was ready much earlier but the new Air wasn't. Remember that the M1 Pro/Max themselves were rumored to have been severely delayed.

Covid, work from home, supply chain issues, brand new chassis designs.

We don't know the actual cadence Apple wants. We cannot deduce that Apple wants an 18-20 month cadence from 1 update cycle during black swan conditions. Logically, it makes more sense for Apple to update the M series once a year (at least the base M).

Here's what we do know:
  • Many reputable reports of delays of new chassis designs holding back new SoC launches
  • M2 is based on A15, suggesting that Apple wants to re-use iPhone cores each year for M
  • Reports are that the M2 will be a short-lived generation
  • Base M chips power far more devices and have a much higher volume than old iPad X chips which suggests that Apple would want to update it more frequently
  • Intel has updated their Mac chips more frequently than the 18-20 months you suggested. One major reason Apple decided to make its own chips is that they were tired of Intel delays.
 
Last edited:
  • Like
Reactions: Eug

Eug

Lifer
Mar 11, 2000
23,586
1,000
126






Plenty of rumors pointing to M2 in late 2021 or early 2022. Rumors were that the M2 was ready much earlier but the new Air wasn't. Remember that the M1 Pro/Max themselves were rumored to have been severely delayed.

Covid, work from home, supply chain issues, brand new chassis designs.

We don't know the actual cadence Apple wants. We cannot deduce that Apple wants an 18-20 month cadence from 1 update cycle during black swan conditions. Logically, it makes more sense for Apple to update the M series once a year (at least the base M).

Here's what we do know:
  • Many reputable reports of delays of new chassis designs holding back new SoC launches
  • M2 is based on A15, suggesting that Apple wants to re-use iPhone cores each year for M
  • Reports are that the M2 will be a short-lived generation
  • Base M chips power far more devices and have a much higher volume than old iPad X chips which suggests that Apple would want to update it more frequently
  • Intel has updated their Mac chips more frequently than the 18-20 months you suggested. One major reason Apple decided to make its own chips is that they were tired of Intel delays.
Ah, I see where you're coming from. The point I was making is Apple's cadence depends on its machines, not (just) its chips.

My expectation was always that the new M2 MacBook Pro would come out at the same time as the M2 MacBook Air, and not before, regardless of whether some of the machines got delayed by supply chain issues. From a marketing point of view, it wouldn't make much sense to release an otherwise unchanged M2 MacBook Pro while leaving its #1 bestseller, the MacBook Air, to languish on the M1 in the old design.

Also, one thing I've learned over the years is that predictions based on supply chain rumours are often off by many months. The best supply chain leakers usually get the "what" right but are often way too early on the "when". Furthermore, there is usually a two-month lag, and sometimes much more, from the time SoCs are available in volume until completed machines are on retail shelves in volume.

I think we need to get out of the mindset that Apple as a chipmaker must do chipmaker things on the same timelines as other chipmakers. Apple is a computing device maker (and a subscription service provider) that also happens to design its own chips now. The desired end products determine what the chips will become, not the other way around.

---

On another note, I finally got my new monitor up and running. It's an Asus ProArt 32" PA328CGV: 1440p, 165 Hz, HDR600, IPS, DisplayPort 1.4, with Calman colour calibration. As such, the colours are fantastic and HDR is nice too. 120 Hz is also a nice bonus. However, with a 1440p panel under macOS, text quality isn't great. It's a real shame that Apple has abandoned non-Retina display support in the OS. Yes, it works, but text quality in 2022 is actually worse on macOS than it was five years ago.
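
Rough pixel-density numbers show why: macOS no longer does subpixel antialiasing, so it only really renders text well at Retina-class densities (the 27" 5K below is just a reference point, not a display I own):

Code:
# Pixel density (PPI) of the two display classes being compared.
import math

def ppi(h_px, v_px, diagonal_inches):
    return math.hypot(h_px, v_px) / diagonal_inches

print(f"32-inch 1440p: {ppi(2560, 1440, 32):.0f} PPI")  # ~92 PPI, well below Retina
print(f"27-inch 5K:    {ppi(5120, 2880, 27):.0f} PPI")  # ~218 PPI, what macOS targets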

A 5K 30" 120 Hz HDR600 display would be perfect for the midrange, but the technology is just not there yet, and that is out of Apple's control. It would also likely cost a fortune. I guess we'll have to wait for DisplayPort 2.1 to become mainstream I guess. I wonder if that's going to be part of the Apple M3 series SoCs.
 
  • Like
Reactions: scineram

poke01

Senior member
Mar 8, 2022
725
697
106
Some of that comes down to the actual core design. Even if the nodes have similar density, AMD just has an overall smaller core that's going to take less power to drive. Even the OG Zen was beating Intel chips with a higher TDP, and that's when AMD was still using Global Foundries and an inferior process node.
So is Intel just bad at core design?? What are their chips even good for?
 
Jul 27, 2020
16,165
10,240
106
So is Intel just bad at core design?? What are their chips even good for?
Possibly. They have a brute-force approach that's working for them so far. However, it fails them in the thin-and-light laptop segment, where AMD and ARM laptops offer much better battery life. Meteor Lake is supposed to be a step in the right direction to fix that, but if Qualcomm-powered Windows on ARM laptops take the world by storm before it sees the light of day, Intel will have the fight of its life.
 

Mopetar

Diamond Member
Jan 31, 2011
7,835
5,981
136
So is Intel just bad at core design?? What are their chips even good for?

Not really, and we've seen that with the E-cores they've developed, which are considerably more area-efficient (about four of them fit into the same area as one of their regular cores) and have let Intel catch back up to AMD in heavily threaded workloads.
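
Rough throughput-per-area arithmetic shows why that trade works in heavily threaded loads. The per-core performance ratio here is an assumption for illustration, not a benchmark result:

Code:
# Why ~4 E-cores in one P-core's die area can win on multithreaded throughput.
# The 0.55x per-core throughput figure is an illustrative assumption.
p_core_area, p_core_throughput = 1.00, 1.00
e_core_area, e_core_throughput = 0.25, 0.55  # ~4 E-cores per P-core of area

e_cluster_throughput = (p_core_area / e_core_area) * e_core_throughput
print(f"Same area, ~{e_cluster_throughput:.1f}x the MT throughput")  # ~2.2x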

Intel's problem was that for years they had no competition and no real need to completely overhaul their design, so they just kept building on top of what they already had. Considering how good that core was when it first came out, this wasn't a bad idea. At first.

Eventually they got caught flat-footed, and with the issues surrounding getting their own nodes to work, they likely didn't want to compound the problem by attempting a radical new design at the same time.

Intel has made plenty of good microarchitectures over the years, but any company that doesn't face competition will grow lazy. No one wants to take what would be considered a major financial risk to build a new core from the ground up, so they lumber along and apply other solutions.
 
Feb 17, 2020
100
245
116
So is Intel just bad at core design?? What are their chips even good for?

Their big-core team specifically uses extremely outdated design methodologies, resulting in cores with significantly lower utilization and higher area and power than what other design teams (including the Atom team) put out. But they do get high max clocks, so marketing's happy.
 
Jul 27, 2020
16,165
10,240
106
No one wants to take what would be considered a major financial risk to build a new core from the ground up, so they lumber along and apply other solutions.
AMD doesn't shy away from doing that. They likely have great technical meetings among the CPU designers, with good back-and-forth discussion and no rank-pulling to force design ideas down everyone's throat; something that Jim Keller may have inspired them to adopt.

Whereas at Intel, they have more heated arguments than useful constructive discussions and if they end up making a mistake, Intel Israel is always ready with something to dig them out of their hole.
 

Mopetar

Diamond Member
Jan 31, 2011
7,835
5,981
136
AMD didn't really have a choice but to take a big risk. But you can also fairly point out that making a big, bold change with Bulldozer bit them squarely in the ass. Not every new design works out, just as Intel learned in the P4 days.

When you're already in a losing position, or at the point where you have nothing to lose, a Hail Mary play becomes a reasonable choice. It's not weird that Intel, being in a dominant position, made the more conservative choices that have left them at a disadvantage, and most investors at the time would have agreed that rather than burning hundreds of millions of dollars on something that might not pan out, the money should just be returned to shareholders.

Internal company politics are a different matter completely. Frankly, anyone who isn't actually there has less of a useful picture than they think. Maybe some of Intel's woes can be attributed to this, but I don't think it's the primary reason their big cores are as bloated as they are.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
So is Intel just bad at core design?? What are their chips even good for?

Talking long time frames here. But the first through fourth generations of Intel's Core chips are when Intel finally reached "good enough" performance, where everything felt fast as long as one had enough RAM, an SSD, and a GPU taking care of some of the work. That's the 2008 to 2013 time frame, and I'm not picking a specific chip because we're also talking TDP and add-on cards (is the GPU a separate card, on the motherboard, or on die? Not until 2013 did we really get all of these, along with an acceptable laptop/ultrabook TDP instead of a desktop one).

But during that same time frame we learned cell phones were the future, and let's be honest, the 2013 cell phones stank! The iPhone 5S debuted in September 2013, and I will argue its A7 was the first "good" cell phone CPU. The irony, in my telling of the history, is that this was Apple's first 64-bit SoC, built on Apple-designed cores rather than stock ARM designs (the iPhone 5's A6 was also a good chip, good for a cell phone but not a workhorse).

But that was nine years ago! And while Intel has been improving their silicon at the design level, they've been saddled with foundries that were always years behind. (This can be debated; I don't care to debate it, but I can see that goals I don't share can still be valid arguments.)

My argument is that the main goal should have been efficiency ever since that "good enough" wall was hit. Sure, there are total-performance improvements (as opposed to performance per watt) that are still achievable, but most of those (maybe 80%) would come naturally if one focused on performance per watt.