Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s
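As a sanity check, the 2.6 Teraflops figure lines up with the 128 execution units under common assumptions (8 ALUs per EU, FMA counted as 2 FLOPs, and a ~1.278 GHz GPU clock; the ALU width and clock are third-party estimates, not Apple-published specs):

```python
# Sanity-check the listed 2.6 TFLOPS figure from the other specs.
# Assumed, not Apple-published: 8 ALUs per execution unit and a
# ~1.278 GHz GPU clock reported by third-party tools.

EUS = 128                 # execution units (from the spec list above)
ALUS_PER_EU = 8           # assumed Apple GPU ALU width
FLOPS_PER_ALU_CYCLE = 2   # fused multiply-add counted as 2 FLOPs
CLOCK_HZ = 1.278e9        # assumed GPU clock

tflops = EUS * ALUS_PER_EU * FLOPS_PER_ALU_CYCLE * CLOCK_HZ / 1e12
print(f"{tflops:.2f} TFLOPS")  # ≈ 2.62, consistent with the quoted 2.6
```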

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads. Just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock speed differences).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K h.264, HEVC, ProRes
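Incidentally, the "100 GB/s" figure works out from LPDDR5-6400 on a 128-bit bus (the bus width is a commonly reported number, not something Apple lists on the spec page):

```python
# Derive the marketed memory bandwidth from the LPDDR5 transfer rate.
# The 128-bit bus width is an assumption based on third-party reporting.

TRANSFER_RATE = 6400e6   # transfers per second (LPDDR5-6400)
BUS_WIDTH_BITS = 128     # assumed unified-memory bus width

bandwidth_gb_s = TRANSFER_RATE * BUS_WIDTH_BITS / 8 / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # 102.4, marketed as "up to 100 GB/s"
```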

M3 Family discussion here:

 
Solution
The reason Macs don’t have them is because Qualcomm charges Apple by the SKU entry price or whatever. It’d be insanely expensive and it’s too niche.

name99

Senior member
Sep 11, 2010
Baldur's Gate is native. Though probably not fully optimized given the limited time available.
Some details here, along with a number of scenes.
As a non-gamer I have no idea what to look for.


One thing that is clear in this talk is that Apple are positioning the M1 as an iGPU. I.e., the point is not "we are better than some 350W monster", it is "we are an iGPU that can do stuff that previously demanded a dGPU".
I noticed the same thing when I had a chance to watch the Apple event directly (not a liveblog), Apple specifically said "M1 has been optimized for our most popular, low-power systems", along with a bunch of other such reminders that this is just step one, that this is us not even trying for maximum performance, only for low power and cost...
 

ultimatebob

Lifer
Jul 1, 2001
OK, I went through all the marketing blurbs and matched the claims with the machines compared:

Up to 3.9X faster video processing
Up to 3.5X faster CPU performance

Testing conducted by Apple in October 2020 using preproduction MacBook Air systems with Apple M1 chip and 8-core GPU, as well as production 1.2GHz quad-core Intel Core i7-based MacBook Air systems, all configured with 16GB RAM and 2TB SSD. Tested with prerelease Final Cut Pro 10.5 using a 55-second clip with 4K Apple ProRes RAW media, at 4096x2160 resolution and 59.94 frames per second, transcoded to Apple ProRes 422. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Air.

Up to 7.1X faster image processing
Testing conducted by Apple in October 2020 using preproduction Mac mini systems with Apple M1 chip, and production 3.6GHz quad-core Intel Core i3-based Mac mini systems, all configured with 16GB of RAM and 2TB SSD. Prerelease Adobe Lightroom 4.1 tested using a 28MB image. Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac mini.

Our high‑performance core is the world’s fastest CPU core when it comes to low‑power silicon.
Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip and 16GB of RAM measuring peak single-thread performance of workloads taken from select industry-standard benchmarks, commercial applications, and open source applications. Comparison made against the highest-performing CPUs for notebooks commercially available at the time of testing. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.

Up to 2X faster CPU performance
Matches peak PC performance using 25% of the power

Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip and 16GB of RAM. Multithreaded performance measured using select industry‑standard benchmarks. Comparison made against latest‑generation high‑performance notebooks commercially available at the time of testing. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.

3X CPU performance per watt
Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip and 16GB of RAM, as well as previous‑generation Mac notebooks. Performance measured using select industry‑standard benchmarks. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.
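For what it's worth, the "matches peak PC performance using 25% of the power" claim implies a 4x perf/watt advantage at that operating point. A quick sketch, where the wattage is a hypothetical placeholder (Apple publishes no absolute figures) and only the ratios matter:

```python
# Turn Apple's ratio claim into an implied perf/watt figure.
# The absolute wattage is a hypothetical placeholder; only ratios matter.

pc_peak_perf = 1.0          # normalize the comparison laptop's peak performance
pc_peak_power = 40.0        # hypothetical wattage for the comparison laptop

# "Matches peak PC performance using 25% of the power"
m1_perf = pc_peak_perf
m1_power = 0.25 * pc_peak_power

perf_per_watt_ratio = (m1_perf / m1_power) / (pc_peak_perf / pc_peak_power)
print(f"{perf_per_watt_ratio:.1f}x perf/watt at the PC's peak point")  # 4.0x
```

Note that the separate "3X CPU performance per watt" line uses a different baseline (previous-generation Mac notebooks), so the two numbers aren't in conflict.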

From teraflops to texture bandwidth to fill rate to power efficiency, this GPU is in a class of its own — and brings the world’s fastest integrated graphics in a personal computer.
Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip and 16GB of RAM using select industry-standard benchmarks. Comparison made against the highest-performing integrated GPUs for notebooks and desktops commercially available at the time of testing. Integrated GPU is defined as a GPU located on a monolithic silicon die along with a CPU and memory controller, behind a unified memory subsystem. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.

Up to 2X faster GPU performance
Matches peak PC performance using 33% of the power

Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip and 16GB of RAM. Performance measured using select industry‑standard benchmarks. Comparison made against latest‑generation high‑performance notebooks commercially available at the time of testing. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.

Up to 15X faster machine learning performance
Testing conducted by Apple in October 2020 using preproduction Mac mini systems with Apple M1 chip, and production 3.6GHz quad-core Intel Core i3-based Mac mini systems, all configured with 16GB of RAM and 2TB SSD. Prerelease Pixelmator Pro 2.0 Lynx tested using a 216KB image. Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac mini.

Up to 17 hrs of wireless web browsing
Up to 20 hrs of movie playback

Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip, 8GB of RAM, and 512GB SSD. The wireless web test measures battery life by wirelessly browsing 25 popular websites with display brightness set to 8 clicks from bottom. The Apple TV app movie playback test measures battery life by playing back HD 1080p content with display brightness set to 8 clicks from bottom. Battery life varies by use and configuration. See apple.com/batteries for more information.

Up to 15 hrs of wireless web browsing
Up to 18 hrs of movie playback

Testing conducted by Apple in October 2020 using preproduction MacBook Air systems with Apple M1 chip and 8-core GPU, configured with 8GB of RAM and 512GB SSD. The wireless web test measures battery life by wirelessly browsing 25 popular websites with display brightness set to 8 clicks from bottom. The Apple TV app movie playback test measures battery life by playing back HD 1080p content with display brightness set to 8 clicks from bottom. Battery life varies by use and configuration. See apple.com/batteries for more information.

Run up to 3x more instrument and effect plug‑ins with Logic Pro.
Testing conducted by Apple in October 2020 using preproduction Mac mini systems with Apple M1 chip, and production 3.6GHz quad-core Intel Core i3-based Mac mini systems, all configured with 16GB of RAM and 2TB SSD. Tested with prerelease Logic Pro 10.6.0 with project consisting of multiple tracks, each with an Amp Designer plug-in instance applied. Individual tracks were added during playback until CPU became overloaded. Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac mini.

Fly through tasks with Final Cut Pro, like rendering a complex timeline up to 6x faster.
Testing conducted by Apple in October 2020 using preproduction Mac mini systems with Apple M1 chip, and production 3.6GHz quad-core Intel Core i3-based Mac mini systems with Intel Iris UHD Graphics 630, all configured with 16GB of RAM and 2TB SSD. Tested with prerelease Final Cut Pro 10.5 using a complex 2-minute project with a variety of media up to 4K resolution. Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac mini.

Yup, Apple is back to its old benchmarking tricks that it used from the PowerPC era. You'll get "3x" better performance using special plugins and benchmarks that are optimized for the new processor, but you'll probably see little of that performance boost in your usual day-to-day usage. If anything, I'll bet that gaming performance is going to suffer compared to similarly priced laptops that have a dedicated GPU and graphics memory.

I'll be curious to read the Ars Technica review of these products once they come out, because they'll look past the hype and run real benchmarks for products that most Apple users actually use on a daily basis.
 

name99

Senior member
Sep 11, 2010
I think you misunderstand me. I think that M1 is likely very strong. It might not beat Renoir at everything MT or Tiger Lake at everything ST, but as an entire package my intuition says that this should be the best laptop chip out there by a good margin. Cezanne might upset that a bit, but will be disadvantaged by 7nm vs 5nm.

I'm not saying that Apple resorted to marketing BS because their product is weak. I'm saying I think the product is strong, but presenting it in a way that reeks of snake oil makes it look weaker than it likely is, since many people are primed to automatically discount this kind of marketing.

You are welcome to your opinions about how something should be marketed. But I suspect the trillion dollar company know rather more about this than you do.
Apple are not targeting this event at you, stop pretending they are. If you want tech details, go read articles like Anandtech.

At the end of the day, this is whining for the sake of whining. The info you want is available, you're just demanding that Apple prioritize your interests over their market.

This is a constant theme in my irritation today -- this entitled view that an event that will be watched by, I don't know, half a billion people?, should be structured to appeal to the particular interests of a few hundred thousand or less.
I ALSO want super detailed info. The difference is I neither believe Apple OWES ME that info, nor do I believe that I represent some sort of majority of buyers who make their decisions based on whether the ROB is 560 vs 640 entries in size.
 

name99

Senior member
Sep 11, 2010
Yup, Apple is back to its old benchmarking tricks that it used from the PowerPC era. You'll get "3x" better performance using special plugins and benchmarks that are optimized for the new processor, but you'll probably see little of that performance boost in your usual day-to-day usage. If anything, I'll bet that gaming performance is going to suffer compared to similarly priced laptops that have a dedicated GPU and graphics memory.

Probably so. Given that they explicitly say the comparison market is laptops with an iGPU, I don't think this is the knockout blow you seem to imagine it to be...
You seem to be upset that someone's selling you a bicycle that they claim is very fast, whereas you can buy a motorbike and go even faster!

When Apple sells what they claim to be a dedicated, best of breed, gaming machine, perhaps THEN is the time to start making comparisons against other gaming machines?
 

HurleyBird

Platinum Member
Apr 22, 2003
You are welcome to your opinions about how something should be marketed. But I suspect the trillion dollar company know rather more about this than you do.

Argument from authority. And no, just being big and successful doesn't make one right. There are people on this forum who, if you had plucked them up and put them in the role of Intel CEO six years ago, would likely have done a much better job.

I ALSO want super detailed info. The difference is I neither believe Apple OWES ME that info, nor do I believe that I represent some sort of majority of buyers who make their decisions based on whether the ROB is 560 vs 640 entries in size.

Strawman. I'm saying that I think the average consumer will respond better to actually being shown the gain (say, if you're 20% faster, show that, e.g. as AMD does at their events) rather than incredibly generic graphs that are obvious approximations, combined with stratospheric claims that are obviously extremely cherry-picked if not outright rigged. Consumers don't need to be technically savvy. Everyone has experienced snake oil, and the way Apple is marketing M1 just reeks of it. The inference is instinctual.
 

ultimatebob

Lifer
Jul 1, 2001
Probably so. Given that they explicitly say the comparison market is laptops with an iGPU, I don't think this is the knockout blow you seem to imagine it to be...
You seem to be upset that someone's selling you a bicycle that they claim is very fast, whereas you can buy a motorbike and go even faster!

When Apple sells what they claim to be a dedicated, best of breed, gaming machine, perhaps THEN is the time to start making comparisons against other gaming machines?

On the flip side, do you see a lot of professional video editors editing their 4K films on a low-end 13" MacBook Pro with Final Cut Pro? Of course not, the 13" screen and 16 GB memory cap would be total dealbreakers for them. They'll take their footage home and edit it on a Mac Pro with a 4K display and over 4 times that much memory.

If you're going to show me Macbook benchmarks, show me benchmarks for Safari or iPhoto. You know, products I would actually use on a laptop like that.
 

Eug

Lifer
Mar 11, 2000
Yup, Apple is back to its old benchmarking tricks that it used from the PowerPC era. You'll get "3x" better performance using special plugins and benchmarks that are optimized for the new processor, but you'll probably see little of that performance boost in your usual day-to-day usage. If anything, I'll bet that gaming performance is going to suffer compared to similarly priced laptops that have a dedicated GPU and graphics memory.
Actually, I find the Logic Pro and Final Cut tests interesting, the latter particularly. Why? Cuz if it bears out with end-user testing, it will mirror the results people have had comparing video editing on the iPad Pro vs. the MacBook Air and MacBook Pro.

For a lot of stuff, the iPad Pro does extremely well, probably because Apple purposely designed the SoC to handle this sort of stuff.


[Images: iPad Pro vs. MacBook video-editing performance charts]

The main problem in the past is stuff like Final Cut didn't exist on Arm. Now it does, just not on the iPad Pro, but on Macs instead.


On the flip side, do you see a lot of professional video editors editing their 4K films on 13" MacBook Pro with Final Cut Pro? Of course not, the 13" screen and 16 GB memory cap would have been total dealbreakers for them. They'll take their footage home and edit it on a Mac Pro with a 4K display and over 4 times that much memory.
You mistake Final Cut users as necessarily "pro". There are a TON of Final Cut users who don't work on Hollywood blockbusters. I would guess Final Cut users on 13" MacBook Pros outnumber Final Cut users on Mac Pros by a ratio of 50:1.

BTW, a friend of mine who is a design lead and has worked on stuff for major international banks and huge multinational companies does all his design work on a 2014 iMac. Even though he is actually a real "pro", he won't be buying a Mac Pro or even an iMac Pro either.
 

name99

Senior member
Sep 11, 2010
On the flip side, do you see a lot of professional video editors editing their 4K films on a low-end 13" MacBook Pro with Final Cut Pro? Of course not, the 13" screen and 16 GB memory cap would be total dealbreakers for them. They'll take their footage home and edit it on a Mac Pro with a 4K display and over 4 times that much memory.

If you're going to show me Macbook benchmarks, show me benchmarks for Safari or iPhoto. You know, products I would actually use on a laptop like that.

"Professional video editors", no.
TikTok editors, yes. And they are the target market...

They said Safari is "1.9x more responsive". But I expect now you're pissed off because you don't know what that means exactly...

How about

"
Browsing with Safari — which is already the world’s fastest browser — is now up to 1.5x speedier at running JavaScript and nearly 2x more responsive.6
"

“World’s fastest browser”: Testing conducted by Apple in August and October 2020 using JetStream 2, MotionMark 1.1, and Speedometer 2.0 performance benchmarks on browsers that completed the test. Tested with prerelease Safari 14 and latest stable versions of Chrome, Firefox, and (Windows) Microsoft Edge at the time of testing, on Intel Core i5-based 13-inch MacBook Pro systems with prerelease macOS Big Sur and Windows 10 Home running in Boot Camp; 12.9-inch iPad Pro (4th generation) units with prerelease iPadOS 14 and Intel Core i7-based Microsoft Surface Pro 7 systems with Windows 10 Pro; and iPhone 11 Pro Max with prerelease iOS 14 and Samsung Galaxy S20 Ultra with Android 10. Devices tested with a WPA2 Wi-Fi network connection. Performance will vary based on usage, system configuration, network connection, and other factors. “Up to 1.5x speedier at running JavaScript and nearly 2x more responsive”: Testing conducted by Apple in September and October 2020 using JetStream 2 and Speedometer 2.0 performance benchmarks. Tested on preproduction MacBook Air and Mac mini systems with Apple M1 chip and 8-core GPU, as well as production 1.2GHz quad-core Intel Core i7-based 13-inch MacBook Air systems and 3.6GHz quad-core Intel Core i3-based Mac mini systems, all configured with 16GB RAM, 2TB SSD, and prerelease macOS Big Sur. Tested with prerelease Safari 14.0.1 and WPA2 Wi-Fi network connection. Performance will vary based on system configuration, network configuration, network connection, and other factors.
 

ultimatebob

Lifer
Jul 1, 2001
Probably so. Given that they explicitly say the comparison market is laptops with an iGPU, I don't think this is the knockout blow you seem to imagine it to be...
You seem to be upset that someone's selling you a bicycle that they claim is very fast, whereas you can buy a motorbike and go even faster!

The problem with this line of thinking is that most competitive laptops with an iGPU cost far less than a 13" MacBook Pro would. Once you get up to $1,300, a dedicated GPU is something that should be expected in a comparable Windows laptop.
 

ultimatebob

Lifer
Jul 1, 2001
Argument from authority. And no, just being big and successful doesn't make one right. There are people on this forum who, if you had plucked them up and put them in the role of Intel CEO six years ago, would likely have done a much better job.



Strawman. I'm saying that I think the average consumer will respond better to actually being shown the gain (say, if you're 20% faster, show that, e.g. as AMD does at their events) rather than incredibly generic graphs that are obvious approximations, combined with stratospheric claims that are obviously extremely cherry-picked if not outright rigged. Consumers don't need to be technically savvy. Everyone has experienced snake oil, and the way Apple is marketing M1 just reeks of it. The inference is instinctual.

The sad thing about dubious benchmarks like this is that the mainstream media will repeat them as gospel without fact checking them. Sure, the tech blogs and magazines will eventually review them and release unbiased benchmarks, but by then the damage is done.

The result is uneducated consumers going to the Apple Store and buying a pretty but overpriced subnotebook, thinking that they're really getting something that's "98% faster than most new PCs". Odds are they'll never know otherwise, because it will still blow the doors off of the 10-year-old Windows 7 PC with a malware infection that they're using now.
 

Eug

Lifer
Mar 11, 2000
The problem with this line of thinking is that most competitive laptops with an iGPU cost far less than a 13" MacBook Pro would. Once you get up to $1,300, a dedicated GPU is something that should be expected in a comparable Windows laptop.
You could make the same argument about Intel Macs too, but that didn't stop them from being purchased.

MacBook Airs with el crappo integrated Intel graphics sold like hotcakes this year. And the M1 MacBook Air is cheaper.


The sad thing about dubious benchmarks like this is that the mainstream media will repeat them as gospel without fact checking them.
It seems you haven't been reading the tech articles or watching the YouTube summaries. Most of them are NOT doing as you say.
 

awesomedeluxe

Member
Feb 12, 2020
@Eug I guess we finally have our answer - Apple didn't go with the A14 in the Air but gave it a best-in-class part instead. I have no doubt the MBA will be the best fanless notebook ever made and by a large margin.

On the other hand, I can feel the aura of disappointment from MBP13 fans. Even though the M1 is doubtless positioned to outperform a 28W Tiger Lake part (and by a lot!), it's clear this part was made for the Air and modified for the MBP.

It feels inevitable that we will see a MBP14 and MBP16 next year with an M1X, sporting 8 perf cores, 12 GPU cores, and LPDDR5. Given how conservative Apple was with this design, I'm not holding out a lot of hope Apple has something cool and creative prepared that can take down the 5600M in the MBP16--they'll just keep selling the Intel model until they do.
 

ultimatebob

Lifer
Jul 1, 2001
You could make the same argument about Intel Macs too, but that didn't stop them from being purchased.

MacBook Airs with el crappo integrated Intel graphics sold like hotcakes this year. And the M1 MacBook Air is cheaper.

It seems you haven't been reading the tech articles or watching the YouTube summaries. Most of them are NOT doing as you say.

I was talking about your typical 6 PM TV newscast that your parents would watch, which will just regurgitate the Apple press release verbatim without attempting to figure out what those numbers mean.

I'm sure that the smarter YouTube tech reviewers like Linus Tech Tips and Gamers Nexus aren't going to fall for those benchmarking tricks. They benchmark stuff for a living, and this isn't their first rodeo reviewing tech products.
 

awesomedeluxe

Member
Feb 12, 2020
I would wait for third-party benchmarks (and other than SPEC and Geekbench)
That skepticism is understandable. The reality is that the M1 will outperform Tiger Lake by a lot on native apps, which will be few and far between at launch.

I have no reservations about the M1's performance in theory, though.
 

name99

Senior member
Sep 11, 2010
The problem with this line of thinking is that most competitive laptops with an iGPU cost far less than a 13" MacBook Pro would. Once you get up to $1,300, a dedicated GPU is something that should be expected in a comparable Windows laptop.

The claim is not that M1 can't compete with any dGPU; it's that it isn't trying to compete with the *best possible* dGPUs.

We'll have to wait for review models, but let's be reasonable about what the claims are and are not.
 

name99

Senior member
Sep 11, 2010
That skepticism is understandable. The reality is that the M1 will outperform Tiger Lake by a lot on native apps, which will be few and far between at launch.

I have no reservations about the M1's performance in theory, though.

How many native apps do you demand? On what time scale?
Safari and Xcode will be native day one; that covers a lot of people. Adobe says Lightroom next month, Photoshop early next year. Based on prior transitions, Mathematica (an app of particular interest to me) will probably be available before the end of the year. Etc., etc.

People said this (where will the native apps be) about the PPC transition.
Then about the OS X transition.
Then about the x86 transition.
Then about the 64-bit transitions (both x86 and ARM).
In every case it was a big nothingburger.
 

Eug

Lifer
Mar 11, 2000
I was talking about your typical 6 PM TV newscast that your parents would watch, which will just regurgitate the Apple press release verbatim without attempting to figure out what those numbers mean.

I'm sure that the smarter YouTube tech reviewers like Linus Tech Tips and Gamers Nexus aren't going to fall for those benchmarking tricks. They benchmark stuff for a living, and this isn't their first rodeo reviewing tech products.
Your typical MacBook Air consumer generally isn't going to be watching sites like those. Hell, I don't even bother watching LTT or Gamers Nexus most of the time, because I have no interest in gaming builds and the like. Different target market.

I'm talking about more mainstream tech sites like CNET, but they too were appropriately cautious: they expressed optimism, but reserved judgement until M1 hardware arrives in their hands. Maybe you're right about 6 pm newscasts with their 15-second blurbs, but that's a different kettle of fish.


That skepticism is understandable. The reality is that the M1 will outperform Tiger Lake by a lot on native apps, which will be few and far between at launch.

I have no reservations about the M1's performance in theory, though.
Speaking about performance measures...

I now think CPU-Monkey's A14X Geekbench numbers are just bogus. Cuz they now have M1 listed, but the Geekbench 5 scores are IDENTICAL to A14X. Not just similar, but identical.



How many native apps do you demand? On what time scale?
Safari and Xcode will be native day one; that covers a lot of people. Adobe says Lightroom next month, Photoshop early next year. Based on prior transitions, Mathematica (an app of particular interest to me) will probably be available before the end of the year. Etc., etc.

People said this (where will the native apps be) about the PPC transition.
Then about the OS X transition.
Then about the x86 transition.
Then about the 64-bit transitions (both x86 and ARM).
In every case it was a big nothingburger.
Xcode, Final Cut, and Adobe Lightroom are going to be very important, because they are apps people on 13" MacBook Pros actually commonly use.

And I think the important measures here are not just compiling performance or media export performance. It's going to be stuff like fluidity of the application interface, too.

For example in apps like LumaFusion on the 2018 iPad Pro, scrubbing is quite smooth even with more than one 4K HEVC asset overlaid, and in many tests stays smooth longer than most recent MacBook Pros with other editing apps the more assets and effects you pile on. It would be quite the accomplishment if they could get the same sort of differential UI and scrubbing performance in Final Cut on the Arm M1 Macs vs. those same MacBook Pros.
 
Apr 30, 2020
I really don't like the comparisons Apple was making. It's clear their chip is pretty good, but I think their over-inflating of the numbers will bite them. They're comparing their new latest-and-greatest CPU to Intel's last gen, which was infamously not super great. No comparisons were made to Tiger Lake, and they have tons of comparisons to i3 CPUs. They didn't have a single Renoir comparison either.

The 4-core Ice Lake i7 in the MacBook Air they used can barely wheeze out 1.2 GHz at 10W, while Renoir configured to 10W can make its 8 cores "scream" up near 1.85 GHz in comparison. https://docs.google.com/spreadsheets/d/1svDb5U2xtju1_sn1pB4hLzeJX3QySIOwf8w2vKyJvD8/edit#gid=0 . I think that ~"3.5x faster" would probably shrink to around 1.4-1.5x in that comparison. Which is still really impressive, but also sort of expected with a brand new chip on a new node vs. an architecture that's pushing almost 1.5 years old now, and a chip that's nearly a year old. I think they avoided any AMD comparisons because it sort of puts the spotlight on how uncompetitive their Intel-only offerings have been. It wouldn't surprise me if the progress AMD has been making with x86 has made Apple second-guess themselves. Of course, the M1 SoC would have begun design work many years ago - probably about when AMD dropped Zen 1.
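A rough sketch of that shrinkage using naive cores x clock scaling (the Ice Lake-over-Zen 2 per-clock advantage below is an assumed round number on my part, not a measurement):

```python
# Naive estimate of how the "3.5x faster" claim vs. the old MacBook Air
# might translate against a 10W Renoir, scaling throughput by cores x clock.
# IPC_RATIO is an assumed round number, not a measured one.

icelake_throughput = 4 * 1.2    # old MacBook Air i7: 4 cores near 1.2 GHz at 10W
renoir_throughput = 8 * 1.85    # Renoir at 10W: 8 cores near 1.85 GHz
IPC_RATIO = 1.2                 # assumed Ice Lake per-clock advantage over Zen 2

m1_vs_icelake = 3.5             # Apple's claim vs. the old Air
renoir_vs_icelake = renoir_throughput / (icelake_throughput * IPC_RATIO)
m1_vs_renoir = m1_vs_icelake / renoir_vs_icelake
print(f"{m1_vs_renoir:.2f}x")   # ~1.36x under these assumptions
```

That lands in the same ballpark as the ~1.4-1.5x guess, and the result swings noticeably with the assumed IPC ratio.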

The other important thing to think about is software optimization. It's not exactly a secret that modern programs (and programmers) absolutely squander the massive processing ability of modern CPUs with sloppy code, bloated libraries and compressed timelines that prevent any kind of real optimization. But Apple has a HUGE incentive to work with developers to make sure the initial ARM versions of these major "Desktop" programs are hyper optimized. It really wouldn't surprise me if these first programs have nearly bespoke ARM source instead of the same source just being compiled for two different architectures. My gut feeling is that later "universal apps" that Apple hasn't had months to tweak and modify probably won't have quite the same performance disparity.
 

Eug

Lifer
Mar 11, 2000
I really don't like the comparisons Apple was making. It's clear their chip is pretty good, but I think their over-inflating of the numbers will bite them. They're comparing their new latest-and-greatest CPU to Intel's last gen, which was infamously not super great. No comparisons were made to Tiger Lake, and they have tons of comparisons to i3 CPUs. They didn't have a single Renoir comparison either.

The 4-core Ice Lake i7 in the MacBook Air they used can barely wheeze out 1.2 GHz at 10W, while Renoir configured to 10W can make its 8 cores "scream" up near 1.85 GHz in comparison. https://docs.google.com/spreadsheets/d/1svDb5U2xtju1_sn1pB4hLzeJX3QySIOwf8w2vKyJvD8/edit#gid=0
They are comparing their new low end Macs to the previous generation low end Macs they replaced. Remember, M1 is their latest, but it certainly won't be their greatest, when it comes to their Mac lines. M1 is entry level.

As bad as that previous MacBook Air might perform, that was actually the previous MacBook Air they sold.

As such, Renoir is completely irrelevant in that context.
 
Apr 30, 2020
They are comparing their new low-end Macs to the previous-generation low-end Macs they replaced. Remember, M1 is their latest, but it certainly won't be their greatest when it comes to their Mac lines. M1 is entry level.

As bad as that previous MacBook Air might perform, that was actually the previous MacBook Air they sold.

As such, Renoir is completely irrelevant in that context.
That would be fair if the only thing they were comparing against were their own products. But they're not. They're comparing against a variety of different products, including Macs running Windows under Boot Camp, the "most popular Windows laptops", the "fastest Windows laptops", "98% of ALL laptops". I suppose for that specific comparison Renoir is irrelevant, but in the context of the entire presentation it certainly is not.

The fact that they really didn't name names in any of their comparisons is telling enough, IMO. AMD, Nvidia, and Intel have no problem at all directly calling out which chips they're comparing against. Apple isn't (except when it's their own machine, and even then it's still vague).
 

IntelUser2000

Elite Member
Oct 14, 2003
It doesn't really matter that they're making comparisons with older Macs. The differences are so titanic that if it used the x86 ISA, it would instantly make everything at 45 W and below obsolete.

You can tell from the CPU architecture overview that it's a truly next-generation part. Will it take AMD/Intel's Zen 4/Golden Cove to match it, or Zen 5/Redwood Cove? Will it take Alder Lake's, or even Rembrandt's GPU to match it, or the one coming the year after?

And this is just performance. The TDP seems to be in the 10 W range.

The PC has officially taken a backseat to mobile, in every category. Makes sense, considering:

AMD didn't progress at all for many years during the Bulldozer era. And Intel stood still for nearly 5 years because of process. Not just that: prior to those stumbles, Intel struggled mightily with NetBurst. At least with NetBurst they had a backup plan, but still.

There are lots of excuses for why Intel can't put their PCH (or at least the minimum amount of I/O so their mobile parts don't need one) on-die. AMD has done it since Carrizo?

I can't say with confidence whether the decline of the x86 ISA is due to AMD/Intel being unable to execute properly, or the x86 ISA being that deficient. Given that the two companies that represent it screwed up so many times and for so many years, I have to say it's 90/10.

Certainly there is some merit to saying the x86 ISA is worse than the others, but it's not as important as other factors, such as economies of scale and the quality of your leadership and team.
 
Apr 30, 2020
It's also literally 1.5 years newer than AMD's Zen 2 architecture, which is far more efficient and performant at any given TDP than most of Intel's chips at low TDPs. At the low end of the TDP range, Renoir represents just as much of a generational leap over Intel's parts in MT loads as this does.

AMD didn't progress during the Bulldozer years because of poor management decisions. When a new architecture turns out bad, you're committed to it for at least 3-5 years until you can replace it with something better, which is exactly what happened. Intel likewise made little progress during the "Bulldozer years" due to poor management choices, and now they're suffering from huge process problems.

Can x86 be made highly efficient? We'll find out. As ARM starts to push "up" out of the ultra-mobile segment, I think you'll find AMD and Intel will start working really hard to push "down" into the ultra-mobile space. We may start to see this if the rumored Samsung mobile SoC with AMD graphics comes to fruition. That massive cache in Navi 2 that saves a ton of power may prove extremely useful in an ultra-mobile setting...
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
One thing that is clear in this talk is that Apple is positioning the M1 as an iGPU. I.e., the point is not "we are better than some 350 W monster"; it is "we are an iGPU that can do stuff that previously demanded a dGPU".
I noticed the same thing when I had a chance to watch the Apple event directly (not a liveblog), Apple specifically said "M1 has been optimized for our most popular, low-power systems", along with a bunch of other such reminders that this is just step one, that this is us not even trying for maximum performance, only for low power and cost...

Not that they'll be able to reach anywhere near maximum graphics performance when Apple silicon relies on a crazy amount of JIT translation from x86-64 binaries to AArch64 binaries...
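The translation tax mentioned above can be sketched with a toy translate-once cache - the general idea behind Rosetta-style binary translation, not Apple's actual implementation (all names and the "ops" format here are made up for illustration): each guest code block is translated on first execution and cached by address, so hot loops approach steady-state speed while cold or dynamically generated code keeps paying for translation.

```python
# Toy model of translate-once binary translation (hypothetical names;
# NOT Apple's actual Rosetta 2 internals). Guest blocks are translated
# on first execution and cached by address, so repeat runs skip the
# translation cost -- cold and freshly generated code pays it every time.

translation_cache = {}
translations_performed = 0

def translate(guest_ops):
    """Stand-in 'translation': rewrite fake x86 ops as fake arm64 ops."""
    global translations_performed
    translations_performed += 1
    return [op.replace("x86:", "arm64:") for op in guest_ops]

def execute(block_addr, guest_ops):
    if block_addr not in translation_cache:   # miss: pay translation cost
        translation_cache[block_addr] = translate(guest_ops)
    return translation_cache[block_addr]      # hit: reuse translated block

execute(0x1000, ["x86:mov", "x86:add"])
execute(0x1000, ["x86:mov", "x86:add"])   # second run hits the cache
print(translations_performed)             # the block was translated only once
```

The design point is the amortization: steady-state throughput depends on translated-code quality, but anything that defeats the cache (JIT-heavy guest apps, self-modifying code) forces retranslation, which is why translated graphics-driver-adjacent workloads are a worst case.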