
Discussion Apple Silicon SoC thread


Eug

Lifer
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as they do with the iPhones and iPads. Just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock speed differences).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC (H.265), and ProRes

M3 Family discussion here:


M4 Family discussion here:


M5 Family discussion here:

 
But I've concluded that the amateur internet world is simply uninterested in actually thinking about this issue [energy vs power] like an adult; they've being making idiotic statements on the issue since the M2, so I'll bow out now and leave the rest of you to fight about this in blissful ignorance.
I mean you literally started your argument out with:
For example let's suppose (I have no idea if this is true)
Fact of the matter is, Apple's metrics are a complete black box. For all we know, the numbers they report to the user are spoofed anyway, to help prevent data-extraction attacks.

This could be resolved very easily by using a scheme that measures ENERGY not power. Either a meter that's designed (and designed WELL...) to measure energy, or by driving the mac off a battery with a trustworthy "capacity" measurement.
Watts are just joules per second, so measuring wall power over time is measuring energy. Even if you put a clamp meter on the wall adapter, you'd get the exact same result. The laptops should have a good power factor with the battery and all, but otherwise, if a plugged-in MacBook is sucking x amount of power through the wall, that's real energy being measured.

Kill A Watts are calibrated to 2% accuracy, which is a bigger discrepancy than what we see in the charts.
Then an Apple internal value reporting essentially a "genuine" average will look very different from a HW meter reporting essentially a "maximum over one second" value.
The way all power metering works is by sampling over time. And given how strong Apple is on power management, I'd find it hard to believe they'd use higher sample rates or faster reporting times, since that would just burn more energy to give similar values. The values on a Kill A Watt might only update on the screen every second, but internally the voltage and amperage are sampled many times a second; otherwise the devices couldn't measure wall frequency, and they'd be much less accurate.
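A minimal sketch of the "average vs. maximum" point above: energy is the integral of power over time, so a meter that samples fast internally can report a faithful average even if its display only updates once a second. All the numbers here are hypothetical, purely for illustration.

```python
def energy_joules(samples_watts, dt_s):
    """Riemann-sum integration: each power sample is held for dt_s seconds."""
    return sum(samples_watts) * dt_s

# A bursty load: a 30 W spike for 0.1 s each second, 5 W idle otherwise,
# sampled at 10 Hz for 10 seconds.
fine = ([30.0] + [5.0] * 9) * 10

total = energy_joules(fine, 0.1)        # 75.0 J over 10 s
average_watts = total / 10.0            # 7.5 W "genuine" average
peak_per_window = [max(fine[i:i + 10]) for i in range(0, len(fine), 10)]
# A meter reporting a "maximum over one second" value would show 30 W every
# window, four times the true average, yet the energy drawn is identical.
```

The same total energy supports very different instantaneous-power readouts, which is why a "genuine average" reporter and a "max over one second" reporter can disagree so much.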
 
I'm not sure Apple is reporting their numbers accurately, but I'm pretty confident they are measuring them accurately at least on their laptops. They've got too much invested in device power management and are too good at it for that to not be the case.
 
New Apple 27” display incoming soon (in a few months?). iOS code confirms a previously leaked J527 display identifier, and indicates it has HDR and ProMotion 120 Hz. It also appears it will use an A19 for the controller SoC.


27” is too small for me these days though, and the 32” 6K Pro Display XDR is way, way too expensive, so as previously mentioned I bought the LG 6K instead.

Anyhow, if it’s 5K 120 Hz, that’s a lot of bandwidth at 57 Gbps, but that should be no problem for current 40 Gbps Thunderbolt Apple Silicon Macs (and iPad Pros), since Apple can just use visually lossless Display Stream Compression (DSC). In fact, DSC is how I get 6K 10-bit 60 Hz with 4:4:4 chroma on my LG. I’m using Thunderbolt 4 and have a USB 4 / TB SSD hanging off the monitor. It appears the monitor is using roughly 16 Gbps (39.8 / 2.5x DSC = 15.9 Gbps), leaving the rest of the downstream bandwidth for the SSD, minus some overhead.
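The bandwidth arithmetic above can be sketched like this. The helper is illustrative only: blanking overhead is omitted (which is why the raw figure lands a bit under the quoted 57 Gbps), and the 2.5:1 DSC ratio is just the figure used in the post; real DSC configurations vary.

```python
def payload_gbps(width, height, refresh_hz, bits_per_channel=10):
    """Uncompressed 4:4:4 RGB pixel data rate in Gbps (blanking ignored)."""
    return width * height * bits_per_channel * 3 * refresh_hz / 1e9

five_k_120 = payload_gbps(5120, 2880, 120)   # ~53 Gbps before blanking overhead
with_dsc = five_k_120 / 2.5                  # ~21 Gbps on the cable at 2.5:1 DSC
fits_tb4 = with_dsc < 40.0                   # comfortably inside a 40 Gbps link
```

Even before compression, the payload is only slightly above the 40 Gbps link rate, so a modest DSC ratio leaves plenty of headroom for tunneled USB traffic like the SSD.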
 
Looking back at that video Jan posted, it looks like matmul is really inefficient on the M4 Max.

The GPU efficiency should improve with the M5 Max, which adds matmul acceleration. Julia still has to add support, though others like llama.cpp already have it. The CPU is a different story.
 
Looking back at that video Jan posted, it looks like matmul is really inefficient on the M4 Max.

The GPU efficiency should improve with the M5 Max, which adds matmul acceleration. Julia still has to add support, though others like llama.cpp already have it. The CPU is a different story.
Link?
 
 
I wouldn't call it cheating. Reviewers who don't know any better share misleading comparisons. That's not Apple's fault.
Indeed. That’s why notebookcheck is a good source: it measures power draw at the outlet. And maybe use the software to measure proportional values from the sensors?
I find it weird that they wouldn’t offer an M5 Pro model. Seems like quite a mess if they skipped past it.
 
With the compactness manifested in the Mac mini and Studio, I just don’t get why anyone would still favor an iMac Pro (or any iMac, for that matter). But far be it from me to infringe on others’ freedom of choice.
 
I find it weird that they wouldn’t offer an M5 Pro model. Seems like quite a mess if they skipped past it.

They have an iMac with M4, if they add an iMac Pro with M5 Max where's the "mess" in skipping past M5 Pro? It probably isn't high volume enough to merit a lot of different configurations so there's a "low" and "high" model, no "middle". That's still twice as many options as there are currently.
 
At some point, the Pro has to die. It's just limping along right now to satisfy a niche inside of a niche. My only thought is that, eventually, it becomes some sort of "pro-box" add-on for the Studio, where it uses something like an Apple-custom OCuLink connection, or even a gang of Thunderbolt 5 (or possibly 6) connections to give the needed bandwidth for what it houses inside.
 

This is a good video comparing the Mac mini M4 Pro to the Spark. It includes wall-power measurements as well.

Not only is the Mac mini faster in some cases, it also draws half the power at the wall compared to the Spark.

Prefill should be improved with the M5 Pro/Max.
 
With the compactness manifested in the Mac mini and Studio, I just don’t get why anyone would still favor an iMac Pro (or any iMac, for that matter). But far be it from me to infringe on others’ freedom of choice.
The iMac is pretty handy if you want extremely simple setups: fewest cables, connectors, etc. A lot fewer things to go wrong, and it doesn't take many IT hours to cover any cost benefit of a separate monitor/CPU. If you've ever set up lab/kiosk deployments, you learn really quickly where your costs get incurred, and every connector carries a pretty high cost.

For the iMac Pro, you kind of have to find those costs within your own time. iMacs are also pretty decent if you have to put your computer in a common space: they look nice and have a VERY small footprint.

Many years ago, when LCD monitors first came out, we were doing an office renovation during a period of rapid expansion, and part of that was having to shrink people's office space by quite a bit. It was a lot cheaper to buy fairly high-end furniture that fit the space well, and fanless computers with flat screens to preserve desk space and minimize noise (these were advisor offices where staff meet people one on one). They were expensive by the standards of the compute work the staff were doing, but cheap relative to the architectural measures we would have needed to accommodate traditional ATX-case/CRT computers.
 
They have an iMac with M4, if they add an iMac Pro with M5 Max where's the "mess" in skipping past M5 Pro? It probably isn't high volume enough to merit a lot of different configurations so there's a "low" and "high" model, no "middle". That's still twice as many options as there are currently.
Maybe. But it would go against their typical Base/Pro/Max tier structure. The price gap between the rumored models will be massive.
 
The Apple kernel debug kit reveals an A15 MacBook test machine, and a more mature A18 Pro MacBook with a MediaTek wireless chipset.

 
The Apple kernel debug kit reveals an A15 MacBook test machine, and a more mature A18 Pro MacBook with a MediaTek wireless chipset.


They've obviously been thinking about a low-end MacBook for a while. Using a MediaTek chipset would help with the BOM (versus Broadcom), but now that they have their own N1 wireless chip, I have to believe the production version would use it.
 
It’s not cheating if Apple states in their documentation that its readings should not be compared across different devices.

It’s fine to compare Mac to Mac but not much else.

You probably shouldn't rely on it much. Or, better said, it has an unknown margin of error, which isn't helpful if you want a good picture of the power-draw behaviour and want to know whether there are any surprises. It's exactly the surprises you want to know about, but that's where relying on such model-based estimation is probably less safe.

If it's estimation/model-based and not relying on actual current/voltage "intake" sensing, there could be nontrivial differences in how close to reality the numbers are for different cores and different SoCs, even if you limit it to Apple devices only.

(Edit: I mean, even telemetry that presents data from on-chip sensors actually measuring currents/voltages isn't always safe; there's always a chance they're off, or that the software incorrectly reports what the raw values mean. So you can't blindly trust even such data with certainty.)
 