
Question Apple Silicon M1 series thread, including M1 Pro, M1 Max - Geekbench 5 single-core >1700

Eug

Lifer
Mar 11, 2000
23,064
575
126
M1
5 nm
Unified memory architecture - LP-DDR4
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from the GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock speed differences).
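As a sanity check on the GPU numbers in the spec list above, the 2.6-teraflop figure is consistent with 128 execution units if you assume 8 FP32 ALUs per EU and a clock around 1.28 GHz. Both of those inputs are assumptions for illustration, not published Apple specs:

```python
# Hedged sanity check of the 2.6 TFLOPS claim for the 8-core M1 GPU.
# ALUS_PER_EU and CLOCK_HZ are assumptions, not Apple-published figures;
# an FMA counts as 2 floating-point operations.
EUS = 128
ALUS_PER_EU = 8          # assumed FP32 lanes per execution unit
OPS_PER_ALU_PER_CLK = 2  # fused multiply-add = 2 ops
CLOCK_HZ = 1.278e9       # assumed GPU clock

tflops = EUS * ALUS_PER_EU * OPS_PER_ALU_PER_CLK * CLOCK_HZ / 1e12
print(f"{tflops:.2f} TFLOPS")  # ~2.62, in line with the quoted 2.6
```

Note that 128 EUs x 8 lanes = 1024 ALUs, which also matches the "up to 24576 concurrent threads" line at 24 threads in flight per ALU.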

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:

 
Last edited:

amrnuke

Golden Member
Apr 24, 2019
1,165
1,730
106
Quotes from your last three posts:

1:
I have told you before.
Do you want to be part of the (not very) in crowd of Phoronix and Larabel? Or do you want to learn something? Because Phoronix is not a place to learn the future of technology; it's a place for a bunch of technology has-beens to get together to reminisce about their glory days, when x86 was all that mattered and people were impressed that you knew how to write a web page.
Patronizing and insulting instead of talking about the technology.

2:
Well duh.
More patronizing.

3:
Based on what? What EXACTLY do you imagine is the limiting constraint in converting a 4 core SoC into an 8 core SoC?

I honestly do not understand 90% of what goes on in this thread. People claim to know and understand technology, but their "understanding" appears to be of cargo-cult nature.
That insanity about vectors in Affinity Photo to me epitomizes everything about this forum. Never in my life would I have imagined that someone who actually claims to understand these things would confuse vector graphics with the utilization of an on-core vector unit. But this is what we are seeing constantly -- people who don't have a clue about the difference between large and small cores giving us opinions about scaling. People who call storage "memory" giving us opinions about DRAM performance.
Unbelievable.
Again, patronizing, no discussion. Just attacking people and insulting them instead of discussing and teaching.



Here's the thing. The rest of us are discussing the M1 as a technology; you're more interested in your political opinions about Apple, and that slants the way you view M1. Whatever.
But try to get it through your head that the rest of us are interested in the technology. Every time you make a statement about technology that is flat out crazy -- but serves to further your political agenda -- you are losing credibility among the technologists.
Perhaps you would be happier finding a group of similar-minded individuals who prioritize political opinions over truth?
It seems like you're the one not all that interested in talking about the technology. Let's get this straight first: you're the one who called Apple "smarter" than AMD and have failed to back up and discuss why. That's not a factual statement; it's a "political" one (though I think you fail to understand the definition of political...). In response to me, you threw out some statement about better branch predictors, etc., and then went off and cited features of the Arm ISA that have nothing to do with whether Apple is smart or not. You can't just say "because their branch predictors are better" and have us all believe it. Can you explain why? Give references for why Apple's branch prediction in their uarch on the Arm ISA is better than AMD's neural branch prediction or TAGE branch prediction in their uarch on x86.

Again, I'm not the one who brought up these derailing statements about one side being better than the other, you are. I was just responding to it. I've talked plenty about the tech and how it might translate. So we can get back to that. I'm open to listening to what knowledge you have, but when you continue to just attack people (from me to other posters to other sites to entire groups of users to companies/computer engineers) instead of actually discussing the tech, you're not doing anyone, or the discussion, any service.

What statements have I made that are flat-out crazy regarding the technology? I've talked about A12's F/V curve and what that might mean for A14/M1 and I've talked plenty about the marketplace. But if there's something about the technology that I've been flat-out crazy on, please respond to that instead of ad hominem attacks.
 
Last edited:

guidryp

Golden Member
Apr 3, 2006
1,398
1,525
136
Just a reminder: Apple has roughly 7% market share of the PC market (roughly 265 to 270 million PCs a year).

[Attachment: PC market-share chart]

Note these numbers are constantly in flux and are partly flawed, because they are merely "estimates" from firms doing survey data: they are not counting exactly, but doing the inexact science of polling and prediction. I bring up the flux because IDC's Q3 2020 numbers put Apple's market share at 8.5% for that quarter, yet the chart above shows only 6.6%, since that is a yearly figure rather than a quarterly one. (For comparison, the Q3 2019 figure was 7.0%.)

Getting to 10% (double digits) should be easy. That is a 50% increase from 6.6% (or a 17% increase from 8.5%), but Apple dominates at the higher price points, while the price point for the "total industry" hovers around $630 per computer, a single ASP figure that lumps the cheapest Atom/Celeron devices together with the most expensive gaming laptops and ultrabooks.
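The percentage jumps above are just arithmetic on the two estimates quoted in this post (6.6% yearly, 8.5% for Q3 2020):

```python
# Relative growth in market share needed to reach 10%, from the two
# estimates cited above (yearly vs. Q3 2020 quarterly).
def pct_increase(current, target):
    """Percentage increase needed to go from `current` share to `target`."""
    return (target - current) / current * 100

idc_yearly = 6.6  # yearly share from the chart
idc_q3 = 8.5      # IDC Q3 2020 estimate

print(f"from {idc_yearly}%: +{pct_increase(idc_yearly, 10):.0f}%")  # ~52%
print(f"from {idc_q3}%: +{pct_increase(idc_q3, 10):.0f}%")          # ~18%
```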
Alternately, Macs seem to represent ~15% of active laptops/desktops connecting to the internet:

10 years ago it was only about 5%, so significant growth.

There is also significant regional split.
Lower in Asia, Africa, South America, and India. (near 10%)
Higher in Oceania and North America. (near 30%)
Europe in between. (near 20%)
 

lobz

Golden Member
Feb 10, 2017
1,812
2,342
136
Explain.

All Speedometer does is measure the completion times of a todo app in React, Ember, Preact, jQuery, etc. These are the most commonly used frameworks on the web, and it's a good indicator of web performance.

FYI, my iPhone XR browses websites considerably faster than my Intel Mac. Speedometer is also clearly faster on my XR than Intel Mac. The benchmark matches my real-life experience.
OK, first of all, you showed me again how your downvote was nothing but a childish display of frustration, because you've (probably unintentionally) just explained to me why and how canned synthetic benchmarks shouldn't be used as trump cards when you must say something is 'tEh most fastestest'. Mock Cinebench's everyday relevance all you want, but it's an actually good indicator of how a system would affect the time you need to do your job in that particular type of workload. Then you try to tackle the situation with this absurd subjective-experience statement, but all I can say is this: if that's true, be a true altruist and don't give away your Intel Mac for Christmas, just throw it in the trash. I'm not sure what you've done with that machine, but I've never seen a properly put-together PC browse the internet slower than any phone. In the very best case for any phone you shouldn't notice a difference, since web browsing is 100% instant on any good PC. Sorry, mate, but you're just being freaking ridiculous.

Also if you truly perceive your phone to be considerably faster than browsing on an actually OK desktop system, I must quote House M.D. and say, 'I have no knowledge of alien physiology', so it's not a debate I can participate in.
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
4,545
6,257
136
A lot of AMD fanboys in denial here.
The irony of it all is literally everybody on this forum agreed M1 is a revolutionary piece of silicon.

But that wasn't enough, because having everyone agree means somebody lacks the means to prove their superiority on the Internets. So you push and push until claims inflate, mutate and become ridiculously false. And when they start reacting you can finally shout to release the pressure: FANNNBOYYYS! They can't accept simple facts! Prove me wrong!

And so instead of having a nice thread to follow performance updates on M1 - which is exactly what the OP and other Apple consumers from Anandtech are interested in, we're forced yet again to filter through regurgitated emotional reactions from a few "alpha" posters who need their daily dosage of drama on the forums.
 

Mopetar

Diamond Member
Jan 31, 2011
6,168
3,004
136
Arguing over whether Cinebench is any good is just as pointless as arguing about Geekbench or SPEC. It's just another benchmark and should be treated only as indicative of what it measures and the kinds of workloads that might be similar. It's not as though Cinebench was the only thing Ryzen did well on, but it did a pretty good job of showcasing the kinds of workloads the chip handled well, so it's no surprise that it gets used as a stand-in for all of those other things. If it seems more prominent, it's likely for that reason and not because it really is any more prominent.
 

insertcarehere

Senior member
Jan 17, 2013
409
279
136
Seems to me like you don't know what you're talking about.

See this video, go to 8:00 to see what I'm talking about. He can't even scrub through the video because it's out of memory. I can do that way smoother than him on my 7700HQ with 32GB. *WAY* smoother.

Edit: From the video "Checking CPU, CPU was fine this entire time, but the memory RAM is really suffering.."
That would matter if 16 GB M1 MBAs didn't exist, but they do, and according to people who do this for a living, the RAM seems to hold up just fine for video editing. It's not like Windows ultrabooks with 32 GB of RAM can be had for reasonable prices anyway.
 

dmens

Platinum Member
Mar 18, 2005
2,237
819
136
What is there to argue? Go look at a die shot of a modern x86 CPU and figure out the size of the decode front-end; then you'll know the claim that transistors are so plentiful that uop translation is free is absurd. It isn't even just the area cost: variable-length decoding, for example, creates design limitations that are very difficult to overcome. Go ask any CPU designer who has had to deal with x86; they will all say the same thing.
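A toy sketch of the variable-length decode problem mentioned above: with x86-style encodings you only learn where instruction N+1 starts after decoding instruction N, while fixed-width Arm-style encodings make every boundary known up front, so a wide decoder can attack them all in parallel. The encodings here are invented; only the dependency structure matters:

```python
# Toy illustration of serial vs. parallel instruction-boundary discovery.
# Encodings are invented: in the "variable" stream, each opcode byte simply
# states its own instruction length.

def boundaries_variable(code: bytes) -> list[int]:
    """Variable-length: instruction N+1's start is unknown until
    instruction N has been (at least partially) decoded."""
    offsets, i = [], 0
    while i < len(code):
        offsets.append(i)
        i += code[i]  # length only known after reading the opcode
    return offsets

def boundaries_fixed(code: bytes, width: int = 4) -> list[int]:
    """Fixed-width: every boundary is known up front with no decoding."""
    return list(range(0, len(code), width))

stream = bytes([2, 0, 3, 0, 0, 1, 4, 0, 0, 0])
print(boundaries_variable(stream))   # [0, 2, 5, 6]
print(boundaries_fixed(stream[:8]))  # [0, 4]
```

Real x86 decoders use length-predecode tricks and marker bits to mitigate this, but those cost area and power, which is the point being made above.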
 

amrnuke

Golden Member
Apr 24, 2019
1,165
1,730
106
It's not going to happen overnight. Some places will never switch.

And I've already said in this forum that it wouldn't surprise me if they actually got to 50% in 5 years, but I expect it to happen within 10 years.

By the way, many enterprises will switch. They just couldn't justify $1200 Macbooks for their $30,000/year employees. But what if they can get $600 Macbook SEs at volume pricing?

Macs at the enterprise level cost less to maintain and have better satisfaction levels:

"In fact, IBM found they saved between $273 - $543 per Mac they deployed compared to PCs."
Now we're getting $600 Macbooks. Wow! What a shift in Apple's vision! That's cheaper than the iPhone mini!

As for the JAMF study: as someone who has been doing actual research for the better part of 15 years, I find it absurd that they didn't control for the variables in that study.

Let me explain why the study has to be taken with a grain of salt: people who choose a Mac (as they did in the Mac@IBM program) are far different from those who have no preference or who declined to be issued a Mac. What they should have done is issue Macs or PCs randomly to a subset of employees and then measure the differences. All the other results are essentially useless.

For example: I have an iPhone at home and bought it because I like it more than Androids. If my employer gave me a choice between a Galaxy and an iPhone and I picked an iPhone, would it be surprising if I told the surveyors that I like my own choice of phone, or that it made me more productive? I also am far less likely to call IT than someone with a Droid at home, who got issued an iPhone. Both subsets are far less likely to call tech support than someone who uses only a landline at home.

Of course someone who chooses Mac is going to require less tech support than someone who takes whatever is given to them - because the person choosing a Mac is more likely to have had prior experience with it, and probably good experience, hence their choice. And when you ask someone if they liked their choice, most people aren't going to say, "Nah, I made a bad choice."

He said it poorly at the end: "I don't know if better employees want Macs, or if giving Macs to employees makes them better. You gotta be careful about cause and effect."

He left out the most important piece -- We didn't construct this study in a manner that can properly draw any conclusions as to why people who selected Macs tended to perform better. Could it be because of halo effects? Because "creatives" tend to gravitate toward the "creative"-oriented Mac? Because people who have technology preferences tend to be a different type of employee? No, we'll just give an either-or of two improbable options: 1) if you don't want a Mac we know you're not as good an employee, or 2) a Mac is a panacea for all enterprise woes.
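The selection effect described above is easy to demonstrate with a toy simulation (all numbers are invented): if experienced users both prefer Macs and file fewer support tickets on any machine, a naive Mac-vs-PC comparison shows Macs "winning" even though the machine has zero effect in the model:

```python
# Toy simulation of selection bias in a choose-your-own-device program.
# All parameters are invented for illustration.
import random
random.seed(0)

def tickets(experienced: bool) -> float:
    # Experienced users file fewer tickets regardless of platform.
    return random.gauss(2.0 if experienced else 6.0, 1.0)

employees = [{"experienced": random.random() < 0.5} for _ in range(10_000)]
for e in employees:
    # Experienced users are far more likely to choose a Mac.
    e["mac"] = random.random() < (0.8 if e["experienced"] else 0.2)
    e["tickets"] = tickets(e["experienced"])

mac = [e["tickets"] for e in employees if e["mac"]]
pc = [e["tickets"] for e in employees if not e["mac"]]
avg = lambda xs: sum(xs) / len(xs)

# The machine had zero effect in this model; the self-selection did all
# the work, yet the naive comparison credits the Mac.
print(f"Mac avg tickets: {avg(mac):.1f}, PC avg tickets: {avg(pc):.1f}")
```

This is exactly why a randomized assignment, as suggested above, is the only design that isolates the machine's effect.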

BTW, they must not be terribly convinced of the results he presented. Even in one of the most forward-thinking companies with all the infrastructure and experience and high level intellect and adaptability you could want, with the first deployment of Macs starting in 2015 (hey, 5 years ago!), they still use 60% Windows and 10% Linux and only 30% Macs.

Apple is not getting 50% marketshare in 2025 or in 2030.
 

guidryp

Golden Member
Apr 3, 2006
1,398
1,525
136
https://www.bloomberg.com/news/articles/2020-12-07/apple-preps-next-mac-chips-with-aim-to-outclass-highest-end-pcs
Up to 16 performance CPU cores for iMac and MacBook Pro.

Up to 32 performance CPU cores for Mac Pro.

Up to 32 GPU cores for iMac.

Up to 128 GPU cores for Mac Pro.

However, not all of these are guaranteed to make it to market immediately, if ever; e.g., they may start with 8-12 performance CPU cores for the MacBook Pro and iMac.

Also working on Mac Pro mini. o_O
Huge grains of salt needed with that rumor. They also cover all the options, so some aspects will be true no matter what.

They say the next iMac Chips will have either 8, 12, or 16 high performance CPU cores. Way to cover all the bases...

For the Mac Pro, the only way they get to 128 GPU cores is with multiple discrete chips. A 64-core part would be similar in size to Nvidia's and AMD's largest parts, so it's possible. But I would expect them to go for something like a 32-core part and use 1-4 of them if they really aim for a 128-core top-end part; that would be one tape-out they could share across multiple configs for multiple models.

Where does it say anything about Mac Pro Mini?
 
Last edited:

name99

Senior member
Sep 11, 2010
399
295
136
@Eug
Can you name a few use cases from your work where an iPad would be the preferable form factor?
I am not imaginative enough to think of any myself. My work is mostly CAD, Excel, and multitasking. I can't possibly imagine a workflow where a tablet form factor could come anywhere close. By tablet, I mean iOS.

As long as you use an app + email + chat app + browser, multitasking and switching windows is so much slower. Not to mention managing files, possibly from network locations.

I just can't imagine any task to be more efficiently done without a pointer and a file manager and robust multitasking.
If your "work" consists of reading a large number of technical PDFs, as does mine, the iPad Pro is vastly superior. Reading for hours on my iPad Pro is a joy, better than a book. Reading anything long on a laptop or my iMac is a PITA.

If your work involves tactile manipulation of material (think graphic design, a lot of music, even a lot of video editing) the immediacy of the iPad Pro screen seems to be preferable to many people over the one-step-removed of a track pad.

But I wouldn't do any professional writing on an iPad, not the writing I do that requires a fully-featured keyboard and frequent referencing between windows.


The thing I keep trying to stress is that you will not understand Apple (and where computing is headed) if you keep asking this question of "which is better, PC or tablet". That's like asking which is better, a hammer or a screwdriver. They are both tools, and you use the appropriate tool for a given task. You can go through life hammering every screw you see, but life is easier if you use a screwdriver.

(However, I suspect you are not completely familiar and comfortable with iPadOS's slideover support. It's an adequate solution for many lightweight multitasking interactions, like music and chat. The heavyweight window support is, yeah, something that needs a totally rethought UI.
My PDF reader app supports its own multiple tabs and split screen (implemented before Apple, so it uses its own code and UI), and I find that a good match to the "PDF reading" task.)
 
  • Like
Reactions: Tlh97 and scannall

guidryp

Golden Member
Apr 3, 2006
1,398
1,525
136
Speaking of numbers of cores, how far up do you think they'll scale?

It's a lock they'll create 12 core chips for MacBook Pros and iMacs, but presumably that will consist of 8 performance cores and 4 efficiency cores. I'm also thinking they'll have a Mac Pro chip with 12 performance cores, but would it make sense to remove the efficiency cores? And what about beyond that? Would it make sense to create a dual-CPU Mac Pro with 2 x 12 performance cores, for a total of 24 performance cores and no efficiency cores?
I agree. The midrange will likely be 8 performance and 4 efficiency cores, since it still has to go in laptops. 2021 will likely bring the new midrange chip that fills in the higher-end MacBooks and regular iMacs, leaving just the Mac Pro and iMac Pro at the high end.

High end is VERY murky.

Multiple options:

Massive monolith with 16+ performance cores.

Some kind of chiplet design, so they can scale core count like AMD does.

Meshing multiple full-function SoCs in some novel way to make use of the combined CPU/GPU/AI cores.

A main SoC used with optional GPU and CPU chiplets to improve performance as needed.

There are so many ways they could go with the high end that it's really hard to figure out where they'll land, and this will probably be the last part to arrive, probably in 2022.
 

amrnuke

Golden Member
Apr 24, 2019
1,165
1,730
106
A Zen 3 core at peak spends 19W-20W. A Firestorm core at peak spends 5W. Ignoring the I/O die, apple cores are significantly more efficient at peak performance.
By the time Zen 4 comes out, it will be competing against M2.
That's disingenuous. The M1 is designed for low power usage and long battery life. The 5950X (with its 20.6 W single-core power usage) is a 105 W TDP chip. Expecting its single-core power usage to be lower than the M1's is absurd; they're designed for totally different things. (Also, interesting that you didn't pick the 5600X, which uses 11 W at peak ST load. Do you have an agenda?) We have no good information on how Zen 3 cores scale down below the 5600X, which is still very much performance-oriented rather than efficiency-oriented, and it could well be that AMD beats, roughly matches, or loses to Apple at that power threshold. We just don't know yet.
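For reference, the raw power ratios implied by the peak single-core figures quoted in this exchange, taking the 5 W / 11 W / 20.6 W numbers at face value. This says nothing about performance per watt, since the single-thread scores are close but not equal:

```python
# Peak single-core power figures quoted in this exchange.
m1_w = 5.0       # Firestorm core, peak
r5600x_w = 11.0  # Ryzen 5600X core, peak ST
r5950x_w = 20.6  # Ryzen 5950X core, peak ST

print(f"5950X core draws {r5950x_w / m1_w:.1f}x the M1 core's peak power")
print(f"5600X core draws {r5600x_w / m1_w:.1f}x the M1 core's peak power")
```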

The M1 is a fantastically powerful and efficient chip. It doesn't need to stand up to Zen 3 anyway. M1 is part of an ecosystem. When people buy an MBA they aren't buying the M1, they're buying Apple.

AVX-512 doesn't come close to dedicated accelerators for ML. It's like asking a CPU to run graphics... why? Sure, you can run it, but GPUs do it better. You can run ML code on a CPU, but dedicated accelerators run it better. At the end of the day, no one cares which part it runs on. If it's faster, it's faster.
I agree. Technically, we could try to exclude all the accelerators, but that's silly. If you can design a good accelerator that makes performance for real-world tasks better, for less power usage, I see that as an absolute win for all.

Renoir runs much higher than 15W. 15W is just the advertised TDP, which says little to nothing about how much power the chip actually draws. Unfortunately, Renoir laptops are also quite rare, and finding information on how much power they use is a nightmare, but I assure you that when one is running multicore benchmarks it isn't drawing 15W to achieve that performance.
Renoir laptops aren't all that rare. I've seen them available for purchase at Best Buy, Costco, as well as online at Acer.com, HP.com, Newegg, etc.

As for power usage, yes, they use more power. The HP ProBook with the 4500U, for instance, uses 28W on average under load, and 48W at absolute peak (including the screen at max brightness while running FurMark and Prime95 at the same time). I'm not sure what Notebookcheck's test suite consisted of, but the Mac mini's average MT workload usage was 26.5W and its peak was 31W (without a screen). Keep in mind the Zen 2 core is 1.5 years old, the GPU is going on 3 years old, the laptop in this comparison has a screen drawing power, we don't know if the test suites are equal w/r/t total power demand, etc. It's apples to oranges. But I'd say the FurMark + Prime95 test is pretty heavy duty.
 
Last edited:

name99

Senior member
Sep 11, 2010
399
295
136
A Zen 3 core at peak spends 19W-20W. A Firestorm core at peak spends 5W. Ignoring the I/O die, apple cores are significantly more efficient at peak performance.
By the time Zen 4 comes out, it will be competing against M2.
Or M3...
Zen to Zen2 was ~2.5 years.
Zen2 to Zen3 was ~1.5 years

There MIGHT be a Zen3+ competing against the A15, but I'd expect for most of its lifetime, except perhaps for a few months, Zen4 will be competing against A16.
 

amrnuke

Golden Member
Apr 24, 2019
1,165
1,730
106
I'm not sure I follow. There may be a tradeoff between pure performance and efficiency, and you would have a point if the M1 were less powerful than the Zen 3 core. But it's not.
And do Ryzen laptop CPUs use a different design compared to desktop? I'm not aware of that. They just use lower frequencies, perhaps less cache, binning. But the core design isn't radically different, is it?
Depending on the benchmark, the Zen 3 core can be more powerful - largely they trade blows (Andrei still hasn't worked through the hmmer bug that's inflating the M1's score).

Yes, the laptop APUs absolutely use a different design. Look up a Renoir die shot vs. a Matisse die shot; they don't just use lower frequencies. The core itself is broadly the same, but packaging can make a big difference in power consumption. Putting the I/O components on the 7nm monolithic APU rather than on a separate 12nm or 14nm IOD saves power and can provide performance gains. Ask Apple how much it helps having all components on the most advanced and power-efficient node!

Zen 3 laptop CPUs should be 20-30% more efficient than their predecessors, just like their desktop brethren. This won't be enough to match the M1.

Also, the M1 core is many times more power efficient than intel's TGL, which is a laptop part (the A14 uses 5W vs 20W for TGL to reach similar SPEC scores.)
We don't know how the Zen 3 core scales down. We also don't know whether it will be enough to match the M1, but I doubt it will be.
 
  • Like
Reactions: Zepp and Tlh97

Doug S

Senior member
Feb 8, 2020
768
1,085
96
To put that into perspective: if the M1 were implemented on N7, it would either be a physically much larger chip, greatly reducing the number of dies per wafer, which would hurt per-chip profit and make the die-plus-RAM package larger and more expensive; or they could have kept the same die size and dies per wafer, but that would likely have cost them two of the Firestorm cores and at least some of the L2 and SLC. Power draw in single-core scenarios would be notably higher, and multithreaded benchmarks would be much slower.

The A12X/A12Z is a 4+4 design just like the M1, so they clearly wouldn't have had to compromise on the number of Firestorm cores on N7, just on the L2/SLC sizing. From the A12 generation to the A14 generation, the biggest transistor increase was in the NPU, which doubled from 8 to 16 cores; presumably they would have stuck with 8.
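To put a rough number on the N7 die-size penalty being discussed: assuming an M1 die of about 120 mm² on N5 and TSMC's oft-quoted ~1.8x logic-density advantage of N5 over N7 (both outside assumptions, not figures from this thread), the same design ports to roughly:

```python
# Rough die-size estimate if the M1's transistor budget were kept on N7.
# Both inputs are outside assumptions (TSMC marketing figures), not data
# from this thread; SRAM and analog scale worse than logic, so treat this
# as a lower bound.
m1_n5_area_mm2 = 120.0  # assumed M1 die size on N5
density_ratio = 1.8     # assumed N5-over-N7 logic density improvement

m1_n7_area_mm2 = m1_n5_area_mm2 * density_ratio
print(f"~{m1_n7_area_mm2:.0f} mm^2 on N7")  # roughly 216 mm^2
```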
 

shady28

Platinum Member
Apr 11, 2004
2,381
199
106
Ok, I think I misunderstood the post I replied to previously, thanks.

As far as M1 comparisons go, it would be interesting to find out what the actual sustained power limits of the MBP and MBA are. Andrei showed that the M1 can use up to 21 W on the CPU and 30+ W for the full SoC when both CPU and GPU are fully loaded. I wonder how much the MBP and MBA can actually sustain, though, even with just a full CPU load. I think comparing the MBP to 45 W x86 CPUs is fine, as there is probably a good cross-section of people who would accept the higher power consumption and its consequences if the 45 W solution provided a significant performance advantage for their applications. Maybe not so much the MBA, but it uses the same chip and will throttle more under serious multi-core workloads.
I think they get lost in the weeds of their analysis. It just makes for a bunch of discussion about things that are irrelevant to actual buyers. I mean really, someone buying a laptop should probably look at (not in this order, but in their own order of preference):

1 - Performance
2 - Battery life
3 - Portability
4 - Aesthetics / construction / comfort

#4 would be affected by things like heat

If someone makes a laptop chip that uses 85W but it's quiet, cool to the touch, fast, light, small, and lasts 48 hours on a charge - do you care? Does it matter how they did it?

I know that's hyperbolic, but rather than focus on the things that feed into a result, the focus should be on the result itself. That is what I mean by getting lost in the 'weeds'.
 
  • Like
Reactions: Tlh97

Roland00Address

Platinum Member
Dec 17, 2008
2,038
150
106
I thought the M1 was an 8 core CPU?
The M1 is 4 big cores + 4 little cores (Firestorm + Icestorm; the Firestorm cores use more power and are faster, while the Icestorm cores are the energy-efficient ones that also take up less die area than the "large" cores).

AMD's Ryzen 5600X is a 65 W, 6-core, 12-thread Zen 3 CPU (i.e., faster than any laptop Zen 2 chip, which maxes out at 45 W; it has both a higher power budget and a newer core than any AMD laptop chip shipping).

The fact that a 65 W 6-core (12-thread) CPU is competing with a 15 W 4+4-core CPU is more a compliment to the 15 W mobile chip.

With the 4 benchmarks he ran besides Geekbench (located here, in his YouTube comments):

The AMD part is 20 to 45% faster. Honestly, I would hope a desktop chip that can use 4.0+x the power is faster, especially since it is also a November silicon release and not something that is 6, 12, or 18 months old.

The M1 is an "ultrabook" chip; it should not be in the same class as a desktop chip with 4x the power budget, based on the history of ultrabook chips from 2008 to now. (The first MacBook Air is why I use 2008; Intel launched the ultrabook initiative in 2012, following the success of the 2010 MacBook Air, the first model to ship an SSD across the whole line, while the 2008 model only had SSDs in some configurations.)

This is a "big deal."
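The power math in this post, spelled out. TDP is only a loose proxy for actual draw, so treat these as rough ratios:

```python
# Back-of-envelope from the numbers in this post: a 65 W desktop chip
# beating a ~15 W mobile SoC by 20-45% in those benchmarks.
desktop_w, mobile_w = 65, 15
power_ratio = desktop_w / mobile_w
print(f"power ratio: ~{power_ratio:.1f}x")  # ~4.3x

# If the desktop part wins by 20-45% while drawing ~4.3x the power, the
# mobile SoC's rough perf-per-watt advantage is:
for speedup in (1.20, 1.45):
    print(f"at +{speedup - 1:.0%} perf, perf/W favors the M1 by "
          f"~{power_ratio / speedup:.1f}x")
```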
 
  • Like
Reactions: guidryp

thunng8

Member
Jan 8, 2013
142
41
101
I thought the M1 was an 8 core CPU?

And a large percentage of modern desktop applications are optimized for multithreaded CPUs, e.g. browsers. Even mobile APUs are multicore these days, so there's nothing wrong with predominantly multithreaded benchmarks.



But it's fine when Geekbench scores which favor mobile CPUs show the M1 in a positive light :cool:
Talking about browsers: the M1 mops the floor with every single browser benchmark out there. Even real-world impressions of page loads, scrolling, and running multiple tabs show a major uplift in performance. Originally people thought it was because Safari was so well optimized, but guess what? Chrome was just compiled for the M1 and shows a similarly superior score.


And you cannot dismiss all the other real-world benchmarks out there. Lightroom and Premiere Pro running under Rosetta in many cases outperform the best x86 processors out there.

There have been so many examples of commercial software running faster on the M1. And we aren't even factoring battery life into the equation: approximately 2x the battery life compared to Ryzen or Tiger Lake.

Sure, it might not run Phoronix benchmarks as fast, but there is still a lot of time for FOSS to optimize for the Arm architecture, as it has been doing for x86 over many, many years.
Geekbench has issues and arguably shouldn't be used as the primary benchmark by as many sites as it is, but I've never seen compelling evidence that it actually "favors mobile CPUs," despite constant claims to that effect. If you read the many extended discussions of it by Torvalds and others at RealWorldTech, for example, there is much debate about whether the selected workloads and testing methods are really representative of meaningful real-world workloads, but no one is decrying it as a mobile-friendly benchmark or arguing that it unfairly favors Apple.
Yes, he is grasping at straws trying to debunk a benchmark. If he doesn't like Geekbench, why not use SPEC CPU? It shows a remarkable correlation with Geekbench, and at a per-core level the M1 is very close to the top-of-the-range 5950X while using a fraction of the power.
 
  • Like
Reactions: Etain05 and guidryp

jeanlain

Member
Oct 26, 2020
104
75
61
Looks like M1's already limited I/O has some teething problems
Yes, early software can have bugs. It's way too early to conclude that these issues are due to supposed M1 limitations regarding I/O.
Regarding FCPX, if the M1 were limited in its throughput, it would be seen across the board, not just in certain scenarios. Lots of people already use the M1 to edit their videos in 4K, and the overall message is that the M1 is excellent. As mentioned in the comments, there could be some issues with the particular Sony codec used by this YouTuber.
As for 60Hz only via HDMI... the M1 can drive a 6K display at 60Hz via Thunderbolt. I'm not sure why this guy has a problem with his monitor; I'm sure other monitors work fine at 60Hz without using HDMI. It can't be a limitation of the M1 per se.
As for USB speed... I don't know. This could also be related to driver issues. I'm sceptical that the USB 4 ports are somehow slower than the USB-C/TB3 ports of the previous models.
 
  • Like
Reactions: IntelCeleron

Eug

Lifer
Mar 11, 2000
23,064
575
126
I feel for the poor sap who has to edit a 4k video with 16 GB of ram.

...

I think it was Linus Tech Tips complaining that Threadripper could only take 256 GB of ram, and that was just barely enough for his 4k video editing.
For some of the comparisons, FOR CERTAIN CONTENT, the M1 Mac was doing better than a 192 GB Mac Pro for 4K editing. YouTuber type 4K video editing, but 4K video editing nonetheless.

The big advantage here was that the M1 Mac was perfectly smooth in the actual editing process with butter smooth scrolling across the timeline, and perfectly clean playback. In contrast, the Mac Pro was stuttering through the same content.

As I mentioned before, it seems that Apple has purpose built the hardware accelerators to handle this sort of thing. It doesn't cover everything, and sometimes M1 fails hard once you hit 8K, but it's still remarkable what they've done... esp. considering this is a mobile SoC for ultrabooks.


Geekbench was used in Apple comparison, because it was the primary cross platform benchmark that would actually work on iPhones/iPads. I can't even think of anything else that easily fits the bill, unless you want to compile your own. Thus everyone used Geekbench. It's understandable that when it's pretty much all you have, you use it.

IMO scorn for Geekbench grew as performance of recent generations iPhone/iPad showed iPhone delivering desktop performance. The mindset developed that this was "Too good to be True" performance for a Smartphone SoC, and therefore Geekbench must be faulty.
Scorn for Geekbench began with its release, and lasted through Geekbench 3 IIRC. From what I gather, it gained a lot more respect with Geekbench 4 and then Geekbench 5.

The point you make is also probably true, but that is actually a later phenomenon.

Cinebench I don't remember being prominent until Ryzen hit the scene. Since then it seems to be the benchmark of choice for AMD, and for "AMD people" to show the core-count advantage over Intel. If anything it's even less applicable than Geekbench to the real world. Geekbench is a composite benchmark; CB is just one single task. In a way it's one of the most simple embarrassingly parallel benchmarks out there.
Cinebench has been popular for just about forever even at AnandTech, at least when comparing Macs and Windows machines since it's cross platform and excludes the GPU. And yes, it's simple, which is actually one of its draws since anyone can run the benchmarks. In fact, there are databases out there which include scores of various Cinebench iterations, at stock clocks and overclocks.

Your experience with these benchmarks may be different though if for example you only started watching this stuff in say the last half-dozen years or so.
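On the composite-vs-single-task distinction: a composite benchmark typically combines its subtest results with a geometric mean so that no single workload dominates the overall number, whereas Cinebench's score is just one render's throughput. A minimal sketch of that aggregation; the sample scores are hypothetical, not real Geekbench subtest results:

```python
from math import prod

def composite_score(subtest_scores):
    """Geometric mean of per-subtest scores -- the usual way a
    composite benchmark rolls many workloads into one number."""
    return prod(subtest_scores) ** (1.0 / len(subtest_scores))

# Hypothetical subtest scores, for illustration only:
print(composite_score([1600, 1750, 1820, 1540]))
```

The geometric mean means a CPU can't buy a high overall score by excelling at only one embarrassingly parallel subtest, which is exactly the criticism leveled at single-task benchmarks like Cinebench above.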
 

Eug

Lifer
Mar 11, 2000
23,064
575
126
Shifting away from the kerfuffle about some people attacking benchmarks they don't like:

Here is a new Ars Technica review of the MBA. They were declined review sample laptops, so this is the reviewer's personal machine, bought to replace an Intel MBA:

Apple’s M1 MacBook Air has that Apple Silicon magic
One of my top 3 features of all time for Mac laptops is MagSafe. It's such a shame they killed it.

BTW, I see a lot of consumers choosing the fanless throttle-risking MacBook Air over the MacBook Pro, just because of that irritating Touch Bar. Thank the gods though that Apple at least listened a bit and brought back the physical ESC key.
 
  • Like
Reactions: Tlh97 and moinmoin

Qwertilot

Golden Member
Nov 28, 2013
1,587
243
106
So let’s say they make an M1X chip for the 16” with 8 big cores and a higher RAM ceiling.

Do you think Apple would also use this same chip in the 13” Pro, even though it would be a different motherboard than the Air and 13” M1 models?
Yes? Not the 13" pro which has the M1 in now, but they've got a load of 13" models with Intel in at the moment. Those are obviously getting replaced :)
 

Doug S

Senior member
Feb 8, 2020
768
1,085
96
How are they going to massacre everything when their multithreaded performance is so far behind?
Where are they "so far behind"? Unless you are comparing them against PCs with 8 big cores (and probably SMT enabled as well) they aren't far behind.

And the "massacre" would come when new generations of the M* come with more big cores, eventually scaling up to the Mac Pro in a couple years. I expect at least 32 big cores in the high end there, maybe more. You're going to have to be comparing with some awfully big (and expensive) x86 hardware to put that "far behind".
 

Qwertilot

Golden Member
Nov 28, 2013
1,587
243
106
That seems like wishful thinking. How many people are willing to switch OS to get faster performance?
Definitely not disagreeing with your basic conclusion but the M1 is giving these devices *much more* than extra CPU performance.

Passive, or near-silent (MBP), operation.
Considerable extra battery life.
Instant wake from sleep.
Enough iGPU to play games at a reasonable level.
Mildly cheaper.
etc

They're all non-trivial selling points. No reason they shouldn't pick up some more market share.

Obviously, with Apple using iPads for the 'cheaper' end of the market, they won't get a massive percentage of overall laptop sales.
 

name99

Senior member
Sep 11, 2010
399
295
136
I thought the M1 was an 8 core CPU?
This is kind of dancing around the terminology if you ask me. If the smaller more efficient cores are being tapped during multithreaded workloads, then the M1 is an 8 core CPU no matter which way you slice it.
Well this is the difference between people whose goal is to understand technology and people whose goal is [redacted]



Inappropriate language for the tech forums.


esquared
Anandtech Forum Director
 
Last edited by a moderator:
