Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,587
1,001
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4
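As a rough cross-check of the GPU figure above, here's a back-of-envelope sketch. The 1024-ALU count and the ~1.28 GHz clock are outside assumptions (from third-party reporting), not stated in this post:

```python
# Back-of-envelope check of the M1 GPU's quoted 2.6 teraflops.
# Assumed (not stated above): 8 cores x 16 EUs x 8 ALUs = 1024 FP32 ALUs,
# and a ~1.278 GHz GPU clock as reported by third-party tools.
alus = 8 * 16 * 8            # 1024 FP32 ALUs behind the 128 EUs
clock_hz = 1.278e9           # assumed GPU clock
flops = alus * 2 * clock_hz  # 2 ops/cycle per ALU via fused multiply-add
print(f"{flops / 1e12:.2f} TFLOPS")  # ~2.62, matching the quoted 2.6
```

The quoted 82 gigatexels/s and 41 gigapixels/s likewise fall out of the same assumed clock times the texture and ROP unit counts.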

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhone and iPad: just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock-speed differences).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC (H.265), and ProRes
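The quoted memory bandwidth is consistent with LPDDR5-6400 on a 128-bit bus. That bus configuration is an assumption here, not something Apple publishes in this spec list:

```python
# Sanity check on the quoted ~100 GB/s, assuming LPDDR5-6400 and a
# 128-bit unified memory bus (assumptions, not stated in the post).
transfers_per_s = 6400e6       # 6400 MT/s
bytes_per_transfer = 128 // 8  # 128-bit bus = 16 bytes per transfer
bandwidth = transfers_per_s * bytes_per_transfer
print(f"{bandwidth / 1e9:.1f} GB/s")  # 102.4, in line with "100 GB/s"
```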

M3 Family discussion here:

 
Last edited:
Mar 11, 2004
23,077
5,559
146
Are people seriously letting this devolve into the same old argument? I can't believe that in almost 2023 the crux of the argument still rests on benchmarks, which have long been pointless endeavors, let alone when comparing different platforms where there are much stronger reasons not to compare in such a manner. The Apple chips are impressive, but as long as they're locked to Apple's walled garden it is largely pointless to compare them as though it's 1:1.

I think for most people who aren't gamers, and especially for average users who just browse the web and do other light tasks, Apple's products likely are superior for their use (largely because of the battery life). Although the comparisons often fail: if that's all they want, then cheap Chromebooks would likely be good enough for them as well, and even gaming laptops are a bit rough, sacrificing a lot of performance for portability (and often being very expensive). And I'd say a lot of people who don't like Apple are often years (if not decades at this point) out of date in their complaints, but the dislike still exists. I like Apple's stuff, but I don't love any of it. But then I don't love any company's products, as there's always something I feel is lacking.

Something else: Apple doesn't offer equivalent products. If Apple made a 2-in-1 that combined the iPad Pro with the MacBook chassis (basically a Surface Book where the tablet is an iPad Pro), I'd be very likely to buy it, even at a premium. But they don't, because they can get people to pay that premium for simpler devices. Instead I bought a ROG Flow X13, which had better gaming capability than MacBooks, had pen input, and is still quite svelte. I sacrificed battery life and some usability (I also have a Galaxy tablet, and it's better for drawing most of the time due to size). I paid less than I would have for either a MacBook or an iPad Pro with similar performance or specs at the time. The Surface Book shows this is absolutely possible and can even be done well (and it would be even better if the tablet portion were an iPad Pro, since it works better as a standalone tablet), at a premium price.
 

MadRat

Lifer
Oct 14, 1999
11,910
238
106
I applaud Apple for their products. I just think people shouldn't compare them to general-purpose processors. A part of me wants to kick Mr. Cook in the jewels for what he's doing in China, but that's nothing to do with the products. And at times I think Apple hurts their consumers, like with the iMac/iMac Pro lines, by insisting on not having a touch-screen interface on a $5,000 workstation or a $1,650 all-in-one. Literally, for those prices you didn't even get the mouse with it. It just makes no sense.
 
Last edited:

poke01

Senior member
Mar 8, 2022
740
721
106
Yeah because that's the end-all, be-all of computing benchmarks.

Meanwhile:


MacMini M1 getting spanked by a 5700G in an environment where software wasn't optimized for all the fun bits in the M1.
Well, the M1 should be beaten, no? Apple does not even support Linux on M1; it's a hack job put together by random talented people.

That's like playing a PS5-optimised game on an Xbox Series X and asking why it performs like poop...

Oh, and the review says "alpha state," but whatever.
 
  • Like
Reactions: Lodix

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,855
136
Well, the M1 should be beaten, no? Apple does not even support Linux on M1; it's a hack job put together by random talented people.

It's a standard ARM build of Linux. It's the closest we've gotten so far to running standard ARM binaries on Apple hardware. If you're interested in the relative performance of Asahi Linux versus macOS, Phoronix has your hookup:


Furthermore, there have been updates that improve performance a bit for newer builds of Asahi Linux:


Oh please, AMD themselves show Geekbench in their keynotes for single-core perf.

And? It's expected, even if it's awful.
 
  • Like
Reactions: moinmoin

mikegg

Golden Member
Jan 30, 2010
1,756
411
136
and OS introduces fixed-function hardware as part of the SoC package to accelerate Y common workload with fairly common repeating algorithms/functions that can easily be modeled in an ASIC.
Like what?

The M series CPUs are just general-purpose chips using the ARM ISA. Any software that compiles for ARM can utilize the M1/M2's ARM-based instruction set.

I think you're referring to the SoC's neural engine. As far as I know, Geekbench and 99.99% of software do not make use of the neural engine.

If I'm wrong, let me know what fixed function hardware the M series uses to beat AMD and Intel CPUs.
 
Last edited:

mikegg

Golden Member
Jan 30, 2010
1,756
411
136
Yeah because that's the end-all, be-all of computing benchmarks.

Meanwhile:


MacMini M1 getting spanked by a 5700G in an environment where software wasn't optimized for all the fun bits in the M1.
I think you're misguided.

First, there's a difference between macOS being optimized for Apple Silicon to make Apple Silicon look better than it is (your claim) and Linux software not being optimized for Apple Silicon.

Phoronix runs a suite of software that has been optimized for x86 instructions over many years, some of it with decades of optimization behind it. Just because the software is able to run on Asahi Linux doesn't mean it's optimized for ARM CPUs.

Take Cinebench, for example: it uses Intel's Embree engine, which is hand-optimized for AVX on x86 chips. Recently, an Apple engineer added support for the ARM NEON instruction set, which boosted performance by 4-12% on the M1. Cinebench has not integrated this patch.

And guess what? The above patch still requires a translation layer between AVX and NEON, so it still does not fully make use of Apple Silicon.

These are the kinds of optimizations that will take many years to arrive for Apple Silicon.

tl;dr: There's a major difference between Apple optimizing macOS for Apple Silicon and the software suite Phoronix chose to test Apple Silicon with.
 

mikegg

Golden Member
Jan 30, 2010
1,756
411
136
I'd like to add that the CPU in the M series takes up relatively few transistors compared to the other parts, as little as 15% of the entire SoC. Apple optimizes its SoCs for maximum efficiency, not for pure CPU performance.

Take, for example, Apple's display controllers, which take up a massive part of the entire SoC, bigger than the CPU itself. This is why a MacBook Air can stay fanless while connected to an external monitor, while an Intel laptop will spin up its fans like crazy.

The fact that the M1 Pro/Max trades blows with a CPU like the Ryzen 9 5950X, which dedicates all of its transistors to the CPU and uses 4-5x more power, is insane.


 

mikegg

Golden Member
Jan 30, 2010
1,756
411
136
If I wanna run Geekbench for a living, I'll know what to buy.
AMD <redacted> run Cinebench for a living though.

And Cinebench is a terrible general-purpose CPU benchmark compared to Geekbench. Ex-AnandTech editor Andrei F. agreed with me:

AnandTech does not disagree.

I heavily favour Geekbench over Cinebench and very much agree with what's being said by OP.

Cinebench absolutely isn't a computational throughput workload.
It's defined by extremely long dependency chains, bottlenecked by caches and partly memory. This is why you get a huge SMT yield from it and why it scales very highly if you throw lots of "weak" cores at it, for example see the M1 4C/8C score scaling.


Use of the word "fanboy" is still not allowed.


esquared
Anandtech Forum Director
 
Last edited by a moderator:

gdansk

Platinum Member
Feb 8, 2011
2,123
2,626
136
AMD <redacted> run Cinebench for a living though.
Seems you're a bit out of date, Cinebench is Intel's domain now. ;)
AMD is good at embarrassingly parallel MT benchmarks like compression and rendering. But now, because Intel has even more cores, they're even better at Cinebench.

And at least there are some people who spend all day rendering (though usually Blender). There isn't anyone who spends all day geekbenching.
 
Last edited by a moderator:

mikegg

Golden Member
Jan 30, 2010
1,756
411
136
Seems you're a bit out of date, Cinebench is Intel's domain now. ;)
AMD is good at embarrassingly parallel MT benchmarks like compression and rendering. But now, because Intel has even more cores, they're even better at Cinebench.

And at least there are some people who spend all day rendering (though usually Blender). There isn't anyone who spends all day geekbenching.
Oh my bad. I guess Intel beat AMD in its own benchmark.

There are probably more AMD chip users who run Cinebench all day than Apple users who run Geekbench all day.

PS. Geekbench's CPU benchmark also contains rendering: https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf
 

Henry swagger

Senior member
Feb 9, 2022
372
239
86
Last edited by a moderator:

gdansk

Platinum Member
Feb 8, 2011
2,123
2,626
136
Oh my bad. I guess Intel beat AMD in its own benchmark.

There are probably more AMD chip users who run Cinebench all day than Apple users who run Geekbench all day.

PS. Geekbench's CPU benchmark also contains rendering: https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf
In the end, Cinebench is a good benchmark for rendering. It generally corresponds pretty well to other rendering workloads like Blender. Unfortunately, Geekbench doesn't really correspond well to any workload. It is a composite score that sabotages itself by including joke components (like AES-XTS, which has almost zero correlation with general performance). To get good results from Geekbench you have to look at the component scores, but usually people don't bother. Even a geometric mean cannot make the composite score useful; they'll have to make GB6 soon.

And mind you, I don't think this would hurt M1/M2 scores in 1T (it would in MT). But it's a reminder, for people defending Geekbench, that Geekbench 5 is presently quite flawed. And there have been numerous discussions with the creators about removing AES-XTS, to which they offered little defence other than keeping GB5 scores comparable to each other.
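To see how one accelerated outlier skews a geometric-mean composite, here's a toy calculation. The subtest numbers are made up for illustration, not real GB5 scores:

```python
from statistics import geometric_mean

# Five "general" subtests scoring near 100, plus one hardware-accelerated
# outlier (think AES-XTS) scoring 4x higher. All numbers are hypothetical.
baseline = [100, 110, 95, 105, 98]
with_outlier = baseline + [400]

print(round(geometric_mean(baseline)))      # ~101
print(round(geometric_mean(with_outlier)))  # ~128: a single subtest moves
                                            # the composite by over 25%
```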

Sent with snark from my M1 Pro.
 
Last edited:

MadRat

Lifer
Oct 14, 1999
11,910
238
106
My comments were not meant to gaslight anyone. I'm simply pointing out that Apple picked their OS to be optimized for their hardware, and vice versa. Neither Intel nor AMD enjoys that kind of support from an OS. It's not a bad thing by any means. Add in that they are on smaller processes than Intel or AMD, and that widens the performance gap. But even Apple struggles on Windows when run in emulation, smaller relative process or not. So Apple's performance depends on their OS-hardware symbiotic relationship.
 

mikegg

Golden Member
Jan 30, 2010
1,756
411
136
In the end, Cinebench is a good benchmark for rendering. It generally corresponds pretty well to other rendering workloads like Blender. Unfortunately, Geekbench doesn't really correspond well to any workload. It is a composite score that sabotages itself by including joke components (like AES-XTS, which has almost zero correlation with general performance). To get good results from Geekbench you have to look at the component scores, but usually people don't bother. Even a geometric mean cannot make the composite score useful; they'll have to make GB6 soon.
Geekbench is the best free general-purpose benchmark we have for CPUs.

The team at Nuvia proved it: https://medium.com/silicon-reimagin...way-part-2-geekbench-versus-spec-4ddac45dcf03

AnandTech's Andrei F. agrees with me as well. He went on to work for Nuvia at Qualcomm.

Cinebench is a benchmark for Cinema 4D, which is niche software within a niche. 99.99% of the people buying these CPUs won't use them for CPU rendering. At least Geekbench gives us an idea of how fast we can expect most applications to be.

Again, Cinebench is a terrible CPU benchmark if you want to know how fast a CPU is. It's a great benchmark if you use Cinema 4D, though.
 
Last edited:
  • Like
Reactions: Viknet

mikegg

Golden Member
Jan 30, 2010
1,756
411
136
Stop. Apple chips and laptops are editing toys that aren't relevant outside the Apple world. You get more value from a cheap PC OEM.
That's funny. I work in Silicon Valley - you know, the place where most of your software is built and the vast majority of your chips are designed?

Here, almost all software is developed on Apple computers. It's hard to find a Windows user in software development in Silicon Valley.

How much more real world do we need?
 
  • Like
Reactions: scineram

gdansk

Platinum Member
Feb 8, 2011
2,123
2,626
136
Geekbench is the best free general-purpose benchmark we have for CPUs.

The team at Nuvia proved it: https://medium.com/silicon-reimagin...way-part-2-geekbench-versus-spec-4ddac45dcf03

AnandTech's Andrei F. agrees with me as well. He went on to work for Nuvia at Qualcomm.

Cinebench is a benchmark for Cinema 4D, which is niche software within a niche. 99.99% of the people buying these CPUs won't use them for CPU rendering. At least Geekbench gives us an idea of how fast we can expect most applications to be.

Again, Cinebench is a terrible CPU benchmark if you want to know how fast a CPU is. It's a great benchmark if you use Cinema 4D, though.
Geekbench is "better" in that it includes more workloads, so in theory it is more representative. But using the GB composite score is flawed, because certain tests end up being tests of memory bandwidth in MT and of the clock speed of the AES accelerators in 1T. CB remains a good indicator of rendering performance provided you run it long enough. In that sense CB is "less flawed" than comparing GB5 composite scores.

Mindlessly defending GB5 composite scores doesn't really help make your case. That article uses the GB5 integer score, which thankfully excludes AES-XTS, not the entire composite score. And it includes such quotes as:
Geekbench is generally less demanding of the micro-architecture than SPEC CPU is

In general, M1/M2 should look even better on the integer/float scores than on the overall composite score. So it doesn't make Apple chips look worse.
 
Last edited:
  • Like
Reactions: Nothingness

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,855
136
Like what?

Hardware H.265 acceleration, for example. The neural engine is another bit, yes, though at least for now, it seems like CPU/SoC reviewers point that out before benchmarks to differentiate between inference performance and "general CPU" performance.

AMD <redacted> run Cinebench for a living though.

You wouldn't want someone using that as the end-all, be-all of benchmarks, would you? No? Good. I wouldn't either. And frankly, I don't care whether Andrei agrees with you or not. Other people whom I respect will still die at the feet of SPEC, even though SPEC has its own problems. That's why you run a suite of as many applications as you can to gauge performance, keeping in mind what they're actually doing and why they perform the way they do.
 

gdansk

Platinum Member
Feb 8, 2011
2,123
2,626
136
Classic ATF tomfoolery over benchmarks. Been going on for a couple decades (with different benchmarks in play).
It has gotten really out of hand in this thread - people must be bored.
Oh, I'm not gonna stop until GB totally removes the crypto score. And at least I've been pretty consistent in the Zen 4, Apple, and Intel threads about its complete irrelevance to general performance. All modern AES instructions correlate more with your memory than with general performance. Useless to include anymore.
 
Last edited: