Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4
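A quick back-of-envelope check on the 2.6-teraflop figure. The EU count and thread count come from the spec list above; the 8-wide FP32 ALU per execution unit and the ~1.28 GHz GPU clock are outside assumptions, not from the spec sheet:

```python
eus = 128          # execution units, from the spec list above
alu_width = 8      # FP32 lanes per EU (assumption)
clock_ghz = 1.278  # commonly reported M1 GPU clock (assumption)

# An FMA counts as 2 FLOPs per lane per cycle
tflops = eus * alu_width * 2 * clock_ghz / 1000
print(f"{tflops:.1f} TFLOPS")   # ~2.6, matching Apple's figure

# The 24576-thread figure works out to threads in flight per EU
threads_per_eu = 24576 // eus
print(threads_per_eu)           # 192
```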

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from the number of GPU cores). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock speed differences).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC (H.265), and ProRes

M3 Family discussion here:

 

beginner99

Diamond Member
Jun 2, 2009
Also, how do you guys think Apple will handle GPUs in their higher-end MacBooks and eventually desktops next year?

No idea, but maybe remove the iGPU and put in more CPU cores, and use AMD graphics as they have in the past. Given the iGPU's size, they could probably double the CPU core count by removing it while keeping die size sane. Trickier will be adding all the additional I/O, which takes a lot of space and doesn't scale as well with process tech.
 

moinmoin

Diamond Member
Jun 1, 2017
BTW, I wonder when Intel truly started to believe Apple would leave them. Back in the A7 era, when Apple launched the first 64-bit ARM phone/tablet chip? Probably much earlier actually.
Apple has always been a priority customer for Intel. Honestly, to me it felt more like Intel's increasing incompetence essentially forced Apple to move away, and I'm surprised that process took this long.

Haven't really had a chance to comment on the M1's performance yet since I've been busy all week.

Honestly, this is about the best-case scenario I could have wanted. No one could doubt the M1 is a beast, and extremely efficient. Shots were definitely fired across the bow of Intel, AMD, and x86-64, and it shows that ARM microprocessors are potentially a huge threat when done right.

But even given the M1's exceptional engineering and process-node advantage, Zen 3 is still very competitive in single-threaded performance and superior in multithreaded, despite being one node behind. There's no reason to believe Zen 4 would not significantly outperform the M1 across most workloads and significantly tighten the gap in performance per watt and overall efficiency, given a healthy boost in IPC while maintaining a strong advantage in clock speed. I'm intentionally not counting Intel, as they likely will not catch up with AMD or Apple in performance per watt until they can get their 7nm process out.

At any rate, the emergence of the M1 is nowhere near the devastating blow to x86 that many ARM proponents suggested it would be. What this has shown me personally is that despite the fanfare surrounding ultra-wide designs like the M1, CPUs with narrower microarchitectures like Zen 3 are just as relevant, with the former not being demonstrably superior to the latter. A refrain I always hear from engineers is that it's all about tradeoffs.
For AMD it's good to have a new competitor it can strive to catch up and eventually surpass, right when Intel was taken care of. ;)

Also, how do you guys think Apple will handle GPUs in their higher-end MacBooks and eventually desktops next year?
For me this was the disappointing part of the M1: it gives not a single clue as to what Apple's answer to your question will be. Apple could continue down the completely locked-down route with zero expandability, essentially using mobile chips even in the high-end desktop. It could go chiplets to achieve some scalability, which could be used to segment products by performance with fewer distinct dies while rivaling other many-core desktop offerings. It could create a "desktop" version that adds all the I/O one would expect from a traditional desktop chip: 16x PCIe, NVMe, SATA, etc. Right now the last option seems the unlikeliest to me. We will see.
 

amrnuke

Golden Member
Apr 24, 2019
Just the battery life alone is a big incentive. Lack of fan noise is another one.
I think that's a reasonable thing - really depends on your priorities.

I wonder how many users are taxing the CPU enough with web browsing and Office to justify paying 25+% more to make it a little quieter and extend battery life from 13 to 18 hours. I value a great keyboard and trackpad, and my XPS 15 has that in spades (though it's getting long in the tooth, and I've strongly considered the MacBook line for a couple of years now). And the screen on MacBooks tends to be pretty solid. As a whole package, I think one could justify that. I just don't think it makes it a "great" value. $1000 for the base model with 8GB memory and a 256GB SSD seems a bit much when a similar HP Envy (which is quite good quality) is like... $669. For a crappier package, an Acer Swift 3 with a 4700U and a 512GB SSD is $629 on Newegg. Even a Zen 2-based 8-core/16-thread laptop with a dGPU can be had for under $1000.

This is so much like buying a car. I know I don't need air conditioned perforated leather seats, but it's nice. Just like the keyboard/touchpad and screen on the Macbook line (for the most part). The value is very dependent on your wants and needs for sure.
 

Doug S

Platinum Member
Feb 8, 2020
Also, how do you guys think Apple will handle GPUs in their higher-end MacBooks and eventually desktops next year?
I'm going to guess Apple is not moving to their own GPUs in higher-end parts yet. They seem to want the GPU packed into the SoC itself, functioning as integrated graphics no matter how performant it is. They could make huge monolithic dies and bin them by active GPU cores, but at a certain point it just becomes far too expensive to produce for such a small market.
What I would guess is that they are going for a chiplet approach to GPUs, but likely in late 2021/early 2022 with the M2 architecture. They don't seem to want to change everything at the same time, so I think they would first change the CPU and keep discrete graphics options for those who need them, then go after GPUs if everything goes well on round one.

The rumored 8+4 core chip that would serve as the 'high end' for laptop-type designs probably has double the GPU cores, and my guess would be dedicated VRAM like GDDR6 on package instead of LPDDR4X, with a traditional memory controller added to interface with off-chip RAM (though it might still be soldered to the board like it is in many low/mid-range Intel Macs).

Such a chip could be designed to also work in chiplet form for both CPU and GPU. The first 8+4 chips would never be used as chiplets in shipping Macs, but only inside Apple's labs to help develop the drivers. Since Apple said the ARM transition would take two years, look for the Mac Pro and iMac Pro to be made using chiplets (or possibly, but unlikely, monolithic chips) based on A16 cores fabbed on TSMC's N3 process in 2022.

With the extra transistors N3 buys them they could go from the M1's 8 core GPU to 32 cores (and obviously two generations better as well) in this 8+4 chip/chiplet for a total of 128 GPU cores in a four chiplet 32 CPU core top end Mac Pro. Would that be able to beat/meet performance levels of Nvidia and AMD's best discrete GPUs in 2022? Your guess is as good as mine. Actually your guess is probably better than mine since I really don't pay much attention to GPU performance so I don't know how M1's GPU compares to the current top discrete GPUs - is it 1/10th the performance? 1/20th? 1/30th? I have no clue.
 

shady28

Platinum Member
Apr 11, 2004
What? No. Did you take a look inside the Mac mini? It's basically a tablet mobo without the screen with a giant fan and a power transformer.

Basically you're getting an iPad Pro with better I/O and a little more RAM, but no screen at all. You can be sure they have a nice healthy margin on these.

Not exactly.

A $799 base model iPad Pro ($100 more than Mini) has an A12Z (older processor on 7nm) with 128GB storage (half the Mini's base model) and 6GB RAM (3/4 the Mini's RAM).

It also lacks the Mini's connectivity.

The Mini is $699 in base form which has twice the storage vs iPad Pro's 128GB, a newer / faster SoC, 33% more RAM, two USB-C / Thunderbolt 4 ports, two type A USB 3 ports, and an Ethernet port.

So in both cases, Apple's base price is pretty low in my view. But it's also impractical for most people. An 11" $799 iPad Pro with 512GB of storage costs $1099. A 12.9" iPad Pro with the same specs costs $1299. A $699 Mac Mini with 16GB RAM and 512GB storage costs $1099.

I don't think Apple makes a ton on either of these in base form. When you spec them up, they really make bank, that's obvious.

For the $400 difference from Mac Mini base to 16GB RAM / 512GB storage, on a PC Laptop I can get 32GB laptop DDR4, a 2TB m.2 drive, an external 2TB backup drive, and have $ left for a nice dinner for two.
 

Carfax83

Diamond Member
Nov 1, 2010
If you check the professional sites, they show the quality issue disappears at bitrates over ~6Mbit/s. That is actually a quite low bitrate. Most tests are done at something like 20 or 40 Mbit, where it makes no difference.

Do you know if there were large differences in the output file sizes? That can matter a great deal, because one of the main goals of encoding software is to produce smaller file sizes with the least amount of compromise possible.

But I have definitely heard that the hardware encoders in Turing-class GPUs are much better than they were in the past in terms of quality.
 

Roland00Address

Platinum Member
Dec 17, 2008
Just a reminder that Apple only sells 10 to 16 million iPads per quarter (so roughly 40 to 60 million per year), and almost all of those iPads use the same SoC as an iPhone, not an X or Z SoC that is different and has its own separate mask.

Are you telling me that Apple was not making money due to having "such an expensive SoC price" on the A-series X or Z products? I don't take that seriously. Apple was making money on them, and I think we can infer from that that Apple is paying the same or less for Apple Silicon compared to a Y-series Intel part.

We are going to see 20 million Macs sold in a year, which is far more than the number of iPad Pros sold in the same year.
 

Carfax83

Diamond Member
Nov 1, 2010
It's crazy how so many people are still in the denial phase. No x86 uarch will match Apple Silicon for PPW for years.

Who claimed that x86 would "match" Apple silicon in PPW? I definitely did not say that. I said the gap would tighten a great deal when AMD comes out with Zen 4. Just having the I/O die on 7nm rather than its current 12nm would yield significant power savings.

And AMD seems confident they can wring out another 20% or so gain in IPC. If they do that while keeping around 5ghz boost clocks for single threaded workloads, it will blow the M1 out of the water in overall performance. How can I say this? Because Zen 3 already trades blows with the M1 in single threaded workloads and destroys it in multithreaded ones.

Then there is the absurd accelerator excuse. Ignoring that Apple's CPU alone is matching the best from either AMD or Intel, the idea that accelerators are somehow 'bad' and 'unfair' completely ignores the paradigm shift in computing over decades. Accelerators ARE the future, and there is great power to be achieved with them, rather than counting on a general design such as a CPU to do everything. It's why servers are increasingly moving to GPUs.

I don't know that accelerators are the future as you claim, because there are pros and cons to hardware acceleration.

Also, Intel is heavily leveraging its AVX-512 instruction set with targeted instructions that accelerate AI, inferencing, machine learning, and the like, so there are other ways of getting there without having to design and integrate custom hardware for these tasks.

But debating Apple's PPW lead and uarch achievements is pure denial at this point. The benchmarks are there; their uarch has been developing for 10 years on mobile, and we know how performant and efficient it is. Just accept it and move on. And talking about whether future chips from Intel or AMD would match it is not as smart as it seems: it would mean Apple is one year ahead of the best x86 designs.

Again, I don't think I've seen anyone claiming that x86-64 would be able to match Apple silicon in PPW, especially for single-threaded performance. They don't have to match them, to be honest; they just have to get close enough.

Renoir at 15W can match and sometimes beat the M1 in multithreaded workloads, and that's using the Zen 2 core, which is over a year old at this point. And Zen 3 can match or beat the M1 in single-threaded performance despite being on 7nm and hobbled by a 12nm I/O die that increases package power.
 

Eug

Lifer
Mar 11, 2000
Entry level M1 MacBook Pro vs previous entry level Intel MacBook Pro similarly configured at similar price points, tested side by side (split-screen video).


Not surprisingly, the entry level Intel MacBook Pro is destroyed.

The bigger difference to me though is that the Intel MacBook Pro got all hot and bothered, and screamed in despair (through its fan), while the M1 MacBook Pro's fan never became audible even once.

One stupid but telling test he did was to launch 50 apps simultaneously. The M1 remained smooth and completely usable for multitasking, and launched those apps quickly. The Intel machine did reasonably well initially, but slowed down once it got to Final Cut, and was completely unusable during the process; all its resources were dedicated to launching those apps.

EDIT:

It is the 2020 13-inch MacBook Pro with 1.4GHz quad-core Core i5-8257U processor, Intel Iris Plus Graphics 645, and 8GB RAM.
 

name99

Senior member
Sep 11, 2010
For me this was the disappointing part of the M1: it gives not a single clue as to what Apple's answer to your question will be. Apple could continue down the completely locked-down route with zero expandability, essentially using mobile chips even in the high-end desktop. It could go chiplets to achieve some scalability, which could be used to segment products by performance with fewer distinct dies while rivaling other many-core desktop offerings. It could create a "desktop" version that adds all the I/O one would expect from a traditional desktop chip: 16x PCIe, NVMe, SATA, etc. Right now the last option seems the unlikeliest to me. We will see.

Terence Winter - The first rule of show business is get...
 


oak8292

Member
Sep 14, 2016
Intel's R&D isn't free either, and they also get huge margins. And it's not like M1 is a ground up design. Apple can spread that cost over Macs and the hundreds of millions of iOS devices it sells.

To tack on to this, I think when you look at die volume, Apple is definitely cost-advantaged.

The x86 ecosystem sells around 60 million laptops and 20 million desktops per quarter, or about 240 million laptops and 80 million desktops per year.



Intel was using 5 dies to cover their consumer SKUs a number of years back, and with increased competition from AMD there are now a few more litho masks used to cover this market.

Apple is selling somewhere around 240 million iPhone SoCs (over the three-year life cycle of iPhones, plus misc. iPad, TV, HomePod, etc.) and a rough estimate of 10+ million iPad Pro SoCs, with two mask sets. Adding the M1 devices will probably add another 10+ million devices and potentially a third mask set, or maybe share one with the iPad Pro SoC.

Bottom line,
Intel spreads core design costs over more cores, but has more mask costs for a similar number of dies.
Apple is sharing process development costs with numerous other TSMC customers.
Apple as a customer of TSMC shares equipment depreciation costs with all of the TSMC customers.
Apple is sharing EDA tools with numerous other TSMC customers (It is likely that EDA tools for TSMC are more fully developed on a node with so many customers banging on them.)
Apple could be purchasing third party IP for numerous blocks to lower costs, e.g. DDR and PCIe PHY etc.
 

nxre

Member
Nov 19, 2020
Who claimed that x86 would "match" Apple silicon in PPW? I definitely did not say that. I said the gap would tighten a great deal when AMD comes out with Zen 4. Just having the I/O die on 7nm rather than its current 12nm would yield significant power savings.
A Zen 3 core at peak draws 19-20W. A Firestorm core at peak draws 5W. Ignoring the I/O die, Apple's cores are significantly more efficient at peak performance.
By the time Zen 4 comes out, it will be competing against M2.

Also, Intel is heavily leveraging its AVX-512 instruction set with targeted instructions that accelerate AI, inferencing, machine learning, and the like, so there are other ways of getting there without having to design and integrate custom hardware for these tasks.
AVX-512 doesn't come close to dedicated accelerators for ML. It's like asking a CPU to run graphics... why? Sure, you can run it, but GPUs do it better. You can run ML code on a CPU, but dedicated accelerators run it better. At the end of the day, no one cares which part it runs on. If it's faster, it's faster.

Renoir at 15w
Renoir runs much higher than 15W. 15W is just the advertised TDP, which says little about how much power the chip actually draws. Unfortunately, Renoir laptops are also quite rare, and finding information on how much power they use is a nightmare, but I assure you that when running multicore benchmarks they don't stay at 15W to achieve that performance.
 

LightningZ71

Golden Member
Mar 10, 2017
So, if Apple follows their normal processor development cadence, we'll see an A15 late next year, as well as an A14X/Z. The A14X/Z should be on N5, and the A15 should be on N5P.

I wonder how the M series will develop. It seems obvious that next year will see some sort of M2 processor for the mid-market products. It's likely to be on N5, as Apple has often stuck with a node through multiple A-series versions recently, though it could instead be on N5P. With N5P and a move to LPDDR5 (Samsung and others are already making phones that use it), they could probably offer a substantial uplift in CPU and iGPU performance. I suspect they will just scale up what they already have: doubling the size of the iGPU, going with 8 high-performance cores, and switching to higher-speed LPDDR5 with the same arrangement, or perhaps double the channels. They should be able to dig deep into the workstation performance bracket.
 

Heartbreaker

Diamond Member
Apr 3, 2006
I think that's a reasonable thing - really depends on your priorities.

I wonder how many users are taxing the CPU enough with web browsing and Office to justify paying 25+% more to make it a little quieter and extend battery life from 13 to 18 hours. I value a great keyboard and trackpad, and my XPS 15 has that in spades (though it's getting long in the tooth, and I've strongly considered the MacBook line for a couple of years now). And the screen on MacBooks tends to be pretty solid. As a whole package, I think one could justify that. I just don't think it makes it a "great" value. $1000 for the base model with 8GB memory and a 256GB SSD seems a bit much when a similar HP Envy (which is quite good quality) is like... $669. For a crappier package, an Acer Swift 3 with a 4700U and a 512GB SSD is $629 on Newegg. Even a Zen 2-based 8-core/16-thread laptop with a dGPU can be had for under $1000.

This is just a rehash of the old, "But, I can buy a cheaper Windows PC" lament.

Golf Clap...

So what? Mac was doing quite well before M1.

M1 improves them drastically, in nearly every way. They are obviously going to do MUCH better with this drastically improved product for the same price (or less).

So they won't be in the sub $700 laptop market, no loss, they never were, and they don't need to be.
 

shady28

Platinum Member
Apr 11, 2004
Do you know if there were large differences in the output file sizes? That can matter a great deal, because one of the main goals of encoding software is to produce smaller file sizes with the least amount of compromise possible.

But I have definitely heard that the hardware encoders in Turing class GPUs is much better than it was in the past in terms of quality.

I think file size is simply a function of bitrate. Puget Systems did a series of tests, linked below; a summary of export performance (speed) is also below.
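Since an encoder targets a bitrate, output size falls out directly from bitrate and duration; a minimal sketch (the helper name is mine):

```python
def file_size_mb(bitrate_mbps: float, duration_s: float) -> float:
    """Approximate encoded size in megabytes: Mbit/s x seconds / 8 bits per byte."""
    return bitrate_mbps * duration_s / 8

# A 10-minute clip at two of the bitrates mentioned in this thread
print(file_size_mb(20, 600))  # 1500.0 MB
print(file_size_mb(40, 600))  # 3000.0 MB
```

So at a fixed bitrate, hardware and software encoders produce essentially the same file size; they differ in speed and in quality per bit.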

They also address the quality issue, stating that you can't really see a difference except at low bitrates, unless you zoom in. If we're talking iPhone 4K video, you can forget it: the 150Mbps source videos they used as examples are impossible to create on an iPhone Pro Max, which records at around 63Mbps max. These are professional-quality videos to start with.

What I'm getting at is, these issues of video quality are for people who have $3000 4K camcorders.

The rest of us making funny dog / cat videos with our phones, it's a non issue.


 

Eug

Lifer
Mar 11, 2000
Final Cut Pro on M1 continues to impress.
M1 MBP with 8 GB RAM vs 2019 12-core MP with 192 GB RAM and upgraded GPU.



Plus, the Mac Pro's timeline playback was all stuttery at that clip I linked, but the M1 played it perfectly.
 

amrnuke

Golden Member
Apr 24, 2019
A Zen 3 core at peak draws 19-20W. A Firestorm core at peak draws 5W. Ignoring the I/O die, Apple's cores are significantly more efficient at peak performance.
By the time Zen 4 comes out, it will be competing against M2.
That's disingenuous. The M1 is designed for low power usage and long battery life. The 5950X (with the 20.6W single-core power usage) is a 105W-TDP chip. Expecting its single-core power usage to be lower than the M1's is absurd; they're designed for totally different things. (Also, interesting that you didn't pick the 5600X, which uses 11W at peak ST usage - do you have an agenda?) We have no good information on how Zen 3 cores scale down below the 5600X, which is still very much performance-oriented rather than efficiency-oriented, and it could well be that AMD beats, roughly equals, or loses to Apple at that power threshold. We just don't know yet.

The M1 is a fantastically powerful and efficient chip. It doesn't need to stand up to Zen 3 anyway. M1 is part of an ecosystem. When people buy an MBA they aren't buying the M1, they're buying Apple.

AVX-512 doesn't come close to dedicated accelerators for ML. It's like asking a CPU to run graphics... why? Sure, you can run it, but GPUs do it better. You can run ML code on a CPU, but dedicated accelerators run it better. At the end of the day, no one cares which part it runs on. If it's faster, it's faster.
I agree. Technically, we could try to exclude all the accelerators, but that's silly. If you can design a good accelerator that makes performance for real-world tasks better, for less power usage, I see that as an absolute win for all.

Renoir runs much higher than 15W. 15W is just the advertised TDP, which says little about how much power the chip actually draws. Unfortunately, Renoir laptops are also quite rare, and finding information on how much power they use is a nightmare, but I assure you that when running multicore benchmarks they don't stay at 15W to achieve that performance.
Renoir laptops aren't all that rare. I've seen them available for purchase at Best Buy, Costco, as well as online at Acer.com, HP.com, Newegg, etc.

As for power usage, yes, they use more power. The HP ProBook with the 4500U, for instance, uses 28W on average under load and 48W at absolute peak (including the screen at max brightness while running FurMark and Prime95 at the same time). I'm not sure what Notebookcheck's test suite consisted of, but the Mac mini's average MT workload usage was 26.5W and its peak was 31W (without a screen). Keep in mind the Zen 2 core is 1.5 years old, the GPU is going on 3 years old, the laptop in this comparison has a screen drawing power, and we don't know if the test suites are equal w/r/t total power demand. It's apples to oranges. But I'd say the FurMark + Prime95 test is pretty heavy-duty.
 

Heartbreaker

Diamond Member
Apr 3, 2006
Final Cut Pro on M1 continues to impress.
M1 MBP with 8 GB RAM vs 2019 12-core MP with 192 GB RAM and upgraded GPU.



Plus, the Mac Pro's timeline playback was all stuttery at that clip I linked, but the M1 played it perfectly.

"But I read on the internet that 16GB was not enough RAM, so these machines were only for casual internet use..." /s

I just watched a similar video where a guy was getting his mind blown comparing the base MacBook Air (8GB/256GB) vs his 32GB RAM i9 MBP.

Maybe these machines can reset some thinking around spec sheet worship.
 

Eug

Lifer
Mar 11, 2000
"But I read on the internet that 16GB was not enough RAM, so these machines were only for casual internet use..." /s

I just watched a similar video where a guy was getting his mind blown comparing the base MacBook Air (8GB/256GB) vs his 32GB RAM i9 MBP.

Maybe these machines can reset some thinking around spec sheet worship.
Well, FWIW, later in the video DaVinci Resolve (the native beta) was stuttering, and they recommend 16 GB minimum to run their software.

Dunno if it's related or not, but nonetheless, their base recommendation is 16 GB.

He's going to do a more exhaustive set of tests with a 16 GB M1 MBP later.
 

name99

Senior member
Sep 11, 2010
A Zen 3 core at peak draws 19-20W. A Firestorm core at peak draws 5W. Ignoring the I/O die, Apple's cores are significantly more efficient at peak performance.
By the time Zen 4 comes out, it will be competing against M2.

Or M3...
Zen to Zen2 was ~2.5 years.
Zen2 to Zen3 was ~1.5 years

There MIGHT be a Zen3+ competing against the A15, but I'd expect for most of its lifetime, except perhaps for a few months, Zen4 will be competing against A16.
 

name99

Senior member
Sep 11, 2010
So, if Apple follows their normal processor development cadence, we'll see an A15 late next year, as well as an A14X/Z. The A14X/Z should be on N5, and the A15 should be on N5P.

I wonder how the M series will develop. It seems obvious that next year will see some sort of M2 processor for the mid-market products. It's likely to be on N5, as Apple has often stuck with a node through multiple A-series versions recently, though it could instead be on N5P. With N5P and a move to LPDDR5 (Samsung and others are already making phones that use it), they could probably offer a substantial uplift in CPU and iGPU performance. I suspect they will just scale up what they already have: doubling the size of the iGPU, going with 8 high-performance cores, and switching to higher-speed LPDDR5 with the same arrangement, or perhaps double the channels. They should be able to dig deep into the workstation performance bracket.

TSMC started N5P testing/Risk production in 2Q2020.
Meaning that (and now that I think about it, it is so obvious!) the M1X will be on N5P!
THAT is what's determining its schedule.

There is a precedent. Remember that the A10 came out in Sept 2016, on 16nm. But the A10X came out in June 2017 on 10nm.

This makes so much sense. It gives Apple time to improve some of the rushed bits of the M1, and gives a free 5% speed boost (which isn't much, sure, but may mean the M1X clocks at, say, 3.5GHz).
And it gives Apple a second round of (totally justified!) "OMG, Apple is king, everyone else is doomed" publicity, maybe in April or May, which should sustain them until the A15/iPhone reveal next September.
 

amrnuke

Golden Member
Apr 24, 2019
This is just a rehash of the old, "But, I can buy a cheaper Windows PC" lament.

Golf Clap...
The conversation was about whether $1000 is a great value for what you get from an MBA. If you don't want to participate in that discussion, and instead want to pigeon-hole people into camps, put words in their mouths, and then attack them for being in that camp and taking stances they're not taking, and be combative and oppositional -- whatever, enjoy your adolescence. If you want to discuss why you think $1000 is a great value for an MBA in the context of the current laptop market, then please redirect your response in that direction.

So what? Mac was doing quite well before M1.
Did I ever claim it wasn't?

M1 improves them drastically, in nearly every way.
Did I ever claim that it doesn't?

They are obviously going to do MUCH better with this drastically improved product for the same price (or less).
Did I ever claim that they wouldn't?

So they won't be in the sub $700 laptop market, no loss, they never were, and they don't need to be.
Did I ever say they would be, should be, claim to be, or need to be?

Who are you even talking to? The amount of strawman non sequitur here is mind-boggling.
 

amrnuke

Golden Member
Apr 24, 2019
Or M3...
Zen to Zen2 was ~2.5 years.
Zen2 to Zen3 was ~1.5 years

There MIGHT be a Zen3+ competing against the A15, but I'd expect for most of its lifetime, except perhaps for a few months, Zen4 will be competing against A16.
The macOS Arm environment should be better with more native apps, etc. next year, and I think it's possible they put out an M2 with the same core as the A15 and on N5P, but wouldn't be surprised if all they release next year is an M1X 8+4, even if still on N5, for the remainder of the MBP lineup -- and do a September/October 2022 refresh of all lines with whatever core powers the A16 across the board.

Regarding competing cores from AMD: Zen2 -> Zen2 refresh was about a year, and given that AMD has been on N7 since mid-2019, I don't think it'd be unreasonable for them to push out a Zen3+/refresh on N5 next summer or perhaps sooner. It sure seems like all these releases are getting really staggered, and at any given moment AMD, Apple, or Intel has the lead in something different.

In any case, I thought this was interesting. The M1 has the ST crown in CB23 for low power chips - with a 26% lead over the 4800U (15W). If we assume Zen2->Zen3 performance difference in CB20 (+23%) carries over to mobile on same TDP, Cezanne would be within 3% of the M1. However, that's with a node disadvantage (N5 vs N7, 15% speed or 30% power).
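Checking that arithmetic (both percentages are the ones quoted above; this is only a ratio estimate, ignoring clock and TDP differences):

```python
m1_vs_4800u = 1.26   # M1's CB23 ST lead over the 4800U, from above
zen3_uplift = 1.23   # assumed Zen 2 -> Zen 3 gain at the same TDP

# M1's remaining lead if that uplift carries over to mobile
remaining_gap = m1_vs_4800u / zen3_uplift - 1
print(f"{remaining_gap:.1%}")  # ~2.4%, i.e. "within 3%"
```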

One thing is for sure - a lot of this is speculation. But I think if you throw Zen3 and Firestorm on the same process, and with similar power limits, it could be interesting. We'll have to see. AMD are a step behind, with their laptop APUs lagging desktop by a few months as usual, and a step behind on node.

But even if Cezanne proves quite competitive, I don't see why it would change Apple's overall strategy. The M1 will still be competitive, and very well may remain the mobile king, even after Cezanne is released.
 

beginner99

Diamond Member
Jun 2, 2009
M1 improves them drastically, in nearly every way. They are obviously going to do MUCH better with this drastically improved product for the same price (or less).

The product is also drastically worse in some aspects, like not supporting Boot Camp, and since that was an official feature, it seemed to matter to some users. The people who actually look at benchmarks and understand the M1's implications can go either way: choose it because of the drastic advantages, or move away because of the drastic disadvantages.

Maybe these machines can reset some thinking around spec sheet worship.

Maybe, but let's be honest: Final Cut Pro is their main piece to show how cool the M1 is. We have no idea how much dedicated hardware it uses, and that will always be faster than using a CPU for the same task. All it shows is that having a fixed set of hardware and custom-made software leads to superior results, which is not surprising really.