
Question New Apple SoC - M1 - For lower end Macs - Geekbench 5 single-core >1700

Page 37

name99

Senior member
Sep 11, 2010
385
290
136
A Zen 3 core at peak draws 19-20W. A Firestorm core at peak draws 5W. Ignoring the I/O die, Apple's cores are significantly more efficient at peak performance.
By the time Zen 4 comes out, it will be competing against M2.
Or M3...
Zen to Zen2 was ~2.5 years.
Zen2 to Zen3 was ~1.5 years.

There MIGHT be a Zen3+ competing against the A15, but I'd expect for most of its lifetime, except perhaps for a few months, Zen4 will be competing against A16.
 

name99

Senior member
Sep 11, 2010
385
290
136
So, if Apple follows their normal processor development cadence, we'll see an A15 late next year, as well as an A14X/Z. The A14X/Z should be on N5, and the A15 should be on N5P.

I wonder how the M series will develop. It seems obvious that next year will see some sort of M2 processor to handle the mid-market products. It's likely to be on N5, as Apple has often stuck with a node through multiple A-series versions. It is possible that, instead, it will be on N5P. With N5P, and Apple moving to LPDDR5 (Samsung and others are already making phones that use LPDDR5), they could probably offer a substantial uplift in CPU and iGPU performance. I suspect that they will just scale up what they already have: double the size of the iGPU, go with 8 HP cores, and switch to higher-speed LPDDR5 with the same arrangement, or perhaps double the channels. That should let them dig deep into the workstation performance bracket.
TSMC started N5P testing/Risk production in 2Q2020.
Meaning that (and now that I think about it, it is so obvious!) the M1X will be on N5P!
THAT is what's determining its schedule.

There is a precedent. Remember that the A10 came out in Sept 2016, on 16nm. But the A10X came out in June 2017 on 10nm.

This makes so much sense. It gives Apple time to improve some of the rushed bits of the M1, and gives a free 5% speed boost (which isn't much, sure, but may mean that the M1X clocks at, say, 3.5GHz).
And it gives Apple a second round of (totally justified!) "OMG, Apple is king, everyone else is doomed" publicity, say in April or May, which should sustain them until the next A15/iPhone reveal in September.
 

amrnuke

Senior member
Apr 24, 2019
999
1,506
96
This is just a rehash of the old, "But, I can buy a cheaper Windows PC" lament.

Golf Clap...
The conversation was about whether $1000 is a great value for what you get from an MBA. If you don't want to participate in that discussion, and instead want to pigeon-hole people into camps, put words in their mouths, and then attack them for being in that camp and taking stances they're not taking, and be combative and oppositional -- whatever, enjoy your adolescence. If you want to discuss why you think $1000 is a great value for an MBA in the context of the current laptop market, then please redirect your response in that direction.

So what? Mac was doing quite well before M1.
Did I ever claim it wasn't?

M1 improves them drastically, in nearly every way.
Did I ever claim that it doesn't?

They are obviously going to do MUCH better with this drastically improved product for the same price (or less).
Did I ever claim that they wouldn't?

So they won't be in the sub $700 laptop market, no loss, they never were, and they don't need to be.
Did I ever say they would be, should be, claim to be, or need to be?

Who are you even talking to? The amount of strawman non sequitur here is mind-boggling.
 
Last edited:

amrnuke

Senior member
Apr 24, 2019
999
1,506
96
Or M3...
Zen to Zen2 was ~2.5 years.
Zen2 to Zen3 was ~1.5 years

There MIGHT be a Zen3+ competing against the A15, but I'd expect for most of its lifetime, except perhaps for a few months, Zen4 will be competing against A16.
The macOS Arm environment should be better with more native apps, etc. next year, and I think it's possible they put out an M2 with the same core as the A15 and on N5P, but wouldn't be surprised if all they release next year is an M1X 8+4, even if still on N5, for the remainder of the MBP lineup -- and do a September/October 2022 refresh of all lines with whatever core powers the A16 across the board.

Regarding competing cores from AMD: Zen2 -> Zen2 refresh was about a year, so I don't think it'd be unreasonable, given that AMD have been on N7 since mid-2019, for them to push out a Zen3+/refresh on N5 next summer or perhaps sooner. It sure seems like all these releases are getting really staggered, and at any given moment AMD, Apple, or Intel has the lead in something different.

In any case, I thought this was interesting. The M1 has the ST crown in CB23 for low-power chips, with a 26% lead over the 4800U (15W). If we assume the Zen2->Zen3 performance difference in CB20 (+23%) carries over to mobile at the same TDP, Cezanne would be within 3% of the M1. However, that's with AMD at a node disadvantage (N7 vs N5: 15% speed or 30% power).
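That estimate is just ratio arithmetic on the quoted numbers; a quick sketch (figures taken from this post, not measurements):

```python
# Back-of-envelope using the figures quoted above (not measurements).
m1_vs_4800u = 1.26    # M1's ST lead over the 4800U in CB23
zen2_to_zen3 = 1.23   # Zen2 -> Zen3 ST uplift seen in CB20 on desktop

# If the desktop uplift carried over to mobile at the same TDP,
# Cezanne would score ~1.23x the 4800U, so the M1's remaining lead is:
gap = m1_vs_4800u / zen2_to_zen3 - 1
print(f"Estimated remaining M1 ST lead: {gap:.1%}")  # ~2.4%, i.e. "within 3%"
```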

One thing is for sure - a lot of this is speculation. But I think if you throw Zen3 and Firestorm on the same process, and with similar power limits, it could be interesting. We'll have to see. AMD are a step behind, with their laptop APUs lagging desktop by a few months as usual, and a step behind on node.

But even if Cezanne proves quite competitive, I don't see why it would change Apple's overall strategy. The M1 will still be competitive, and very well may remain the mobile king, even after Cezanne is released.
 

beginner99

Diamond Member
Jun 2, 2009
4,667
1,078
136
M1 improves them drastically, in nearly every way. They are obviously going to do MUCH better with this drastically improved product for the same price (or less).
The product is also drastically worse in some aspects, like not supporting Boot Camp, and since that was an official feature, it seemed to matter to some users. The people that actually look at benchmarks and understand the M1's implications can go either way: choose it because of the drastic advantage, or move away because of the drastic disadvantage.

Maybe these machines can reset some thinking around spec sheet worship.
Maybe, but let's be honest: Final Cut Pro is their main piece to show how cool the M1 is. We have no idea how much dedicated hardware it uses, and that will always be faster than using a CPU for the same task. All it shows is that having a fixed set of hardware and custom-made software leads to superior results, which is not surprising, really.
 

Eug

Lifer
Mar 11, 2000
23,002
521
126
The product is also drastically worse in some aspects, like not supporting Boot Camp, and since that was an official feature, it seemed to matter to some users. The people that actually look at benchmarks and understand the M1's implications can go either way: choose it because of the drastic advantage, or move away because of the drastic disadvantage.



Maybe, but let's be honest: Final Cut Pro is their main piece to show how cool the M1 is. We have no idea how much dedicated hardware it uses, and that will always be faster than using a CPU for the same task. All it shows is that having a fixed set of hardware and custom-made software leads to superior results, which is not surprising, really.
Boot Camp hasn’t been much of a draw for years now. Almost nobody cares these days. Much more important would be VM support so I’m surprised you didn’t even mention that.

BTW, it’s not just Final Cut. Video editing with LumaFusion has been silky smooth on iPad Pros for quite some time. I was doing stuff on my 2017 A10X model that made 2018 MacBook Pros struggle. Apple has been optimizing its software on Intel for 1.5 decades, but they could only do so much if the hardware support isn’t there. I suspect Apple felt hamstrung by Intel’s and AMD’s conflicting priorities and decided just to fix it themselves.

The iPad Pro A series chip proof of concept was very successful and I’m glad they brought that to the Mac, finally.
 
Last edited:
  • Like
Reactions: teejee

senttoschool

Golden Member
Jan 30, 2010
1,501
195
106
So how is gaming performance on these? A CPU is no good without a good GPU...
Vastly superior to AMD's best iGPU and Intel's Tigerlake.
Final Cut Pro on M1 continues to impress.
M1 MBP with 8 GB RAM vs 2019 12-core MP with 192 GB RAM and upgraded GPU.



Plus, the Mac Pro's timeline playback was all stuttery at that clip I linked, but the M1 played it perfectly.
But @DrMrLordX said the M1 must be slower than the 4900HS because it loses in Cinebench, which is just a niche product that Ryzen owners use to stretch their e-pe....

People don't use Cinebench on their laptop. They do video editing, web browsing, opening apps, transferring files, doing designs, etc. All of these things, the M1 wrecks Renoir in performance.

The M1 chip is the fastest laptop CPU you can buy, period. It's also the fastest laptop chip for the vast majority of the time, period.
 
Last edited:

senttoschool

Golden Member
Jan 30, 2010
1,501
195
106
The conversation was about whether $1000 is a great value for what you get from an MBA. If you don't want to participate in that discussion, and instead want to pigeon-hole people into camps, put words in their mouths, and then attack them for being in that camp and taking stances they're not taking, and be combative and oppositional -- whatever, enjoy your adolescence. If you want to discuss why you think $1000 is a great value for an MBA in the context of the current laptop market, then please redirect your response in that direction.
MBA at $1000 is a great value.

Yes, you can get 16GB of RAM and 512GB SSD, and maybe a low-end discrete GPU on a $1000 Windows PC.

But with the MBA, you're getting:

  • A higher resolution, higher quality screen
  • Better touchpad (Mac touchpads are the best, period)
  • Significantly faster CPU performance
  • Competent GPU performance (assuming $1000 PC gets you entry-level discrete GPU)
  • Significantly better battery life
  • Significantly better portability
  • Significantly better build quality (the MBA is all metal)
  • Significantly cooler and quieter laptop
  • Significantly better overall system responsiveness
Believe me, I've been trying to switch from MacBooks to Windows laptops because I absolutely despise the Touch Bar. I found that MacBooks were competitive with Windows laptops in terms of hardware and quality when they used Intel chips. Now MacBooks seem like a no-brainer with Apple Silicon.

With Windows PCs, any time you want a high-quality, high-brightness, high-resolution screen that can match MacBook Retina screens, the price instantly increases to MacBook prices. This is why I haven't switched to Windows.

I've been trying to tell PC master race nerds for years that people who buy Apple products aren't idiots. Apple's phone, tablet, and laptop hardware is genuinely worth the price.

The dumbest thing Apple has done to MacBooks is the Touch Bar, which increased the price and degraded the user experience.
 
Last edited:

teejee

Senior member
Jul 4, 2013
345
177
116
Also, Intel is heavily leveraging its AVX-512 instruction set with targeted instructions that accelerate AI, inferencing, machine learning, and all the other stuff, so there are other ways of getting there without having to design and integrate custom hardware for these tasks.
AVX-512 has been on the market for 5 years with a very low adoption rate in software compared to AVX2. So regardless of how good AVX-512 is, it doesn't give advantages to most users.

And rapidly decreasing market share among power users (to AMD) doesn't help the future of AVX-512 either.
 

jeanlain

Member
Oct 26, 2020
41
17
36
Final Cut Pro on M1 continues to impress.
I'm always cautious when benchmarks involve exporting to H.264 and H.265, as the results heavily depend on the dedicated encoding hardware, which the Mac Pro may not have (at least, it doesn't have Quick Sync). Sure, in the end the M1 can be faster than the Mac Pro, but this doesn't necessarily reflect the performance of the M1's CPU cores.
 

senttoschool

Golden Member
Jan 30, 2010
1,501
195
106
I'm always cautious when benchmarks involve exporting to H.264 and H.265, as the results heavily depend on the dedicated encoding hardware, which the Mac Pro may not have (at least, it doesn't have Quick Sync). Sure, in the end the M1 can be faster than the Mac Pro, but this doesn't necessarily reflect the performance of the M1's CPU cores.
In the end, does it really matter? It looks like only ~20% of the M1 die is actually dedicated to the CPU; the rest is the GPU and accelerators.

If you want to compare CPU to CPU, you can, though we already know what the CPU performance is like compared to Intel and AMD.

I think what's more interesting now is how the chip performs overall in actual common applications and how those applications take advantage of the neural engine and accelerators inside the M1.
 

jeanlain

Member
Oct 26, 2020
41
17
36
That's disingenuous. The M1 is designed for low power usage and long battery life. The 5950X (with the 20.6W single-core power usage) is a 105W TDP chip. Expecting its single-core power usage to be lower than the M1's is absurd; they're designed for totally different things.
I'm not sure I follow. There may be a tradeoff between pure performance and efficiency, and you would have a point if the M1 were less powerful than the Zen 3 core. But it's not.
And do Ryzen laptop CPUs use a different design compared to desktop? I'm not aware of that. They just use lower frequencies, perhaps less cache, and binning. But the core design isn't radically different, is it?
Zen 3 laptop CPUs should be 20-30% more efficient than their predecessors, just like their desktop brethren. That won't be enough to match the M1.

Also, the M1 core is many times more power-efficient than Intel's TGL, which is a laptop part (the A14 uses 5W vs 20W for TGL to reach similar SPEC scores).
 

Gideon

Golden Member
Nov 27, 2007
1,092
1,943
136
Or M3...
Zen to Zen2 was ~2.5 years.
Zen2 to Zen3 was ~1.5 years

There MIGHT be a Zen3+ competing against the A15, but I'd expect for most of its lifetime, except perhaps for a few months, Zen4 will be competing against A16.
It is really impressive how well Apple executes.

But just to note, Zen to Zen 2 was an outlier, when AMD was essentially broke. They have committed to a 12-15 month cadence (15 months in practice) and they've kept it so far. Zen 4 will be out in Q1 to early Q2 2022, not late 2022.
 

Carfax83

Diamond Member
Nov 1, 2010
5,972
792
126
A Zen 3 core at peak draws 19-20W. A Firestorm core at peak draws 5W. Ignoring the I/O die, Apple's cores are significantly more efficient at peak performance.
By the time Zen 4 comes out, it will be competing against M2.
Yes, and a great deal of the M1's power efficiency comes from being on 5nm. When Zen 4 is on 5nm, AMD can get 30% less power for the same performance.

AVX-512 doesn't come close to dedicated accelerators for ML. It's like asking a CPU to run graphics... why? Sure, you can run it, but GPUs do it better. You can run ML code on a CPU, but dedicated accelerators run it better. At the end of the day, no one cares which part it runs on. If it's faster, it's faster.
My point is that CPUs will get faster and more efficient at these tasks as well. And the cost of R&D for developing and integrating custom hardware might eventually be seen as problematic, as the average consumer just doesn't use applications that leverage AI and machine learning on a regular basis.

Someone earlier brought up the analogy of PhysX. Many years ago, in the P4 era, Ageia tried to sell PC gamers on a dedicated physics processing card. It wasn't seen as a strange idea, because CPUs were so much weaker back then; a P4 could not run even one instance of cloth physics simulation at an acceptable framerate, for instance. Eventually Nvidia purchased Ageia and ported the API to run on their GPUs, so developers could implement mostly aesthetic effects like extra particles, cloth, and smoke simulation. This typically called for a dedicated GPU PhysX card, as attempting to run GPU PhysX on your main rendering card could tank performance to unacceptable levels.

Fast forward to the Core i7 era when CPUs got higher core counts, SMT, 256 bit SIMD and much higher IPC, suddenly CPUs were now powerful enough (in combination with physics software overhauls) to run all of the effects that hardware accelerated PhysX was capable of. In fact, cloth physics algorithms can run faster on modern CPU than they do on GPUs, or so I've read.

So now today, developers no longer care about implementing hardware accelerated physics in their games, and Nvidia has stopped trying to get consumers to buy dedicated GPU PhysX cards and PhysX runs primarily in software, which is actually much better for gameplay.

Renoir runs much higher than 15W. 15W is just the advertised TDP, which says little to nothing about how much power the chip actually draws. Unfortunately, Renoir laptops are also quite rare, and finding information on how much power they use is a nightmare, but I assure you that when it's running multicore benchmarks it doesn't run at 15W to achieve that performance.
The M1 sucks up power in multithreaded workloads as well, despite lots of technical advantages over Renoir as amrnuke alluded to.
 
  • Like
Reactions: Tlh97

Carfax83

Diamond Member
Nov 1, 2010
5,972
792
126
I think the file size is a function of bitrate. Puget Systems did a series of tests linked below. The summary of export performance (speed) is also below.
Here's the thing: hardware-accelerated encoding is very nice for real-time or performance-sensitive encoding. If I were going to stream a game on YouTube, I would certainly choose NVENC over anything else. But for movies, I would choose a software solution, because if you want maximum quality and flexibility, offline encoding is best.

It's similar to why offline rendering is still usually done on CPUs, even to this day.

They also address the quality issue, stating that you couldn't really see a difference except in low-bitrate modes, unless you zoom in. If we're talking, say, iPhone 4K video, you can forget it. The 150Mbps source videos they used as examples are impossible to create from an iPhone Pro Max; the max it records at is around 63Mbps. These are professional-quality videos to start with.
They said this in that article:

One thing to keep in mind is that our testing was done with "VBR, 1 pass" since hardware encoding currently doesn't support 2 pass encoding. If you were to use 2 pass encoding with software encoding, the quality difference would be a bit more pronounced (although it would also take significantly longer).
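On the earlier point that file size is a function of bitrate: for a given average bitrate, output size is fixed regardless of encoder, which is why these comparisons come down to quality at a target bitrate. A trivial sketch (the clip length and bitrates are illustrative, drawn from the numbers in this post):

```python
def encoded_size_mb(bitrate_mbps: float, duration_s: float) -> float:
    """Approximate output size in megabytes: megabits/s * seconds / 8 bits per byte."""
    return bitrate_mbps * duration_s / 8

# A 10-minute clip at the ~63 Mbps an iPhone records vs a 150 Mbps master:
print(encoded_size_mb(63, 600))   # 4725.0 MB
print(encoded_size_mb(150, 600))  # 11250.0 MB
```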
 

nxre

Junior Member
Nov 19, 2020
15
16
36
Yes, and a great deal of the M1's power efficiency comes from being on 5nm. When Zen 4 is on 5nm, AMD can get 30% less power for the same performance.
No. 7nm to 5nm only gives a 10% efficiency improvement for the same performance (Looking at the Kirin 9000 vs the Snapdragon 865+, we’re seeing a 10% reduction in power at relatively similar performance. Both chips use the same CPU IP, only differing in their process node and implementations - https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive/3)
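Taking the per-core peak figures quoted earlier in the thread (~19-20 W for a desktop Zen 3 core vs ~5 W for Firestorm), a shrink alone doesn't close the gap under either claim; a rough sketch (all numbers are the thread's quoted figures, not measurements):

```python
zen3_core_w = 19.5   # midpoint of the 19-20 W per-core peak figure quoted earlier
firestorm_w = 5.0    # Firestorm per-core peak figure quoted earlier

# Two scenarios for a straight 7nm -> 5nm port at the same performance:
claimed_30pct = zen3_core_w * (1 - 0.30)   # the "30% less power" claim
observed_10pct = zen3_core_w * (1 - 0.10)  # ~10%, per Kirin 9000 vs SD865+

# Both land well above Firestorm's ~5 W, so the node alone doesn't close the gap.
print(claimed_30pct, observed_10pct)
```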
The M1 sucks up power in multithreaded workloads as well, despite lots of technical advantages over Renoir as amrnuke alluded to.
On Cinebench, which seems to be the benchmark that most favours Zen 3, the M1 uses 15W in MC and 3.8W in SC to achieve its peak results. I'd be curious to see how much Renoir draws to achieve its MC results.
 

nxre

Junior Member
Nov 19, 2020
15
16
36
We have no good information on how Zen 3 cores scale down beyond the 5600X core which is still very much performance oriented rather than efficiency oriented, and it very well could be the case that AMD beats, is roughly equal to, or loses to Apple at that power threshold. We just don't know yet.
The M1 achieves its peak SC performance at 5W. The 5800U (Cezanne), using Zen 3 cores, will reach peak performance at 4.4GHz, which, looking at Zen 3 5600X power draw, seems to be around 10-11W. How exactly will AMD overcome this deficit, realistically? And, most importantly, why would they? Laptops are not close to being a big market for them.
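One way to see why that gap is hard to close: near the top of the V/f curve, dynamic power scales roughly with f·V², and since voltage tends to rise with frequency, power grows roughly cubically with clock. A crude model (the cubic exponent and the 10.5 W starting point are assumptions for illustration, not measured data):

```python
def clock_at_power(f0_ghz: float, p0_w: float, p1_w: float) -> float:
    """Crude estimate: P ~ f^3 near the top of the V/f curve, so f1 = f0 * (P1/P0)^(1/3)."""
    return f0_ghz * (p1_w / p0_w) ** (1 / 3)

# If a Zen 3 core draws ~10.5 W at 4.4 GHz, a ~5 W budget buys roughly:
print(f"{clock_at_power(4.4, 10.5, 5.0):.2f} GHz")  # ~3.4 GHz under this model
```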
 

Gideon

Golden Member
Nov 27, 2007
1,092
1,943
136
Using Zen 3 cores, peak performance will come at 4.4GHz, which, looking at Zen 3 5600X power draw, seems to be around 10-11W.
They will probably clock a bit higher (ST), looking at the difference between Renoir and the Matisse/Vermeer desktop parts. But you are right about power draw: they certainly won't make up the difference.

While I doubt AMD will reach anywhere near M1 efficiency, let's not forget that going to 7nm AMD doubled perf/watt (with less than half coming from the process, according to them), and with Zen 3 they managed to improve power efficiency by another 20% on the same node. Considering their trajectory and what patents reveal about Zen 4, I'd be surprised if Zen 4 doesn't offer at least a 50% improvement in that category. Leaks saying Genoa is 96 cores seem to hint at it as well (they can't really increase the TDP, as it would require water cooling).
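Compounding the stated and speculated gains gives a sense of the trajectory (the Zen 4 figure is this post's guess, not a roadmap number):

```python
zen1_to_zen2 = 2.0   # ~2x perf/W moving to 7nm, per AMD
zen2_to_zen3 = 1.2   # +20% perf/W on the same node
zen3_to_zen4 = 1.5   # the "at least 50%" speculation above

cumulative = zen1_to_zen2 * zen2_to_zen3 * zen3_to_zen4
print(f"Zen 1 -> Zen 4 perf/W: {cumulative:.1f}x")  # 3.6x
```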

Laptops are the second biggest market for them, after servers.
And both really need perf/watt.
 
  • Like
Reactions: Tlh97 and coercitiv

Eug

Lifer
Mar 11, 2000
23,002
521
126
I'm always cautious when benchmarks involve exporting to H.264 and H.265, as the results heavily depend on the dedicated encoding hardware, which the Mac Pro may not have (at least, it doesn't have Quick Sync). Sure, in the end the M1 can be faster than the Mac Pro, but this doesn't necessarily reflect the performance of the M1's CPU cores.
Of course it’s hardware accelerated. That was my whole point. Apple has built in excellent hardware acceleration, and leveraged it in such a way that it is a vastly superior user experience. Unfortunately, QuickSync just doesn’t cut it.

AMD based hardware support is better than QuickSync and Apple does support it on the Mac Pro (but not MacBook Pro), at least in some workflows.
 

nxre

Junior Member
Nov 19, 2020
15
16
36
While I doubt AMD will reach anywhere near M1 efficiency, let's not forget that going to 7nm AMD doubled perf/watt (with less than half coming from the process, according to them), and with Zen 3 they managed to improve power efficiency by another 20% on the same node. Considering their trajectory and what patents reveal about Zen 4, I'd be surprised if Zen 4 doesn't offer at least a 50% improvement in that category. Leaks saying Genoa is 96 cores seem to hint at it as well (they can't really increase the TDP, as it would require water cooling).
I wish we had good benchmarks on performance per watt; it's getting extremely hard to compare when we don't even have matching data sets between chips, and all we can do is guess at some numbers.
 

DrMrLordX

Lifer
Apr 27, 2000
16,641
5,644
136
Apple could probably have made this slower in all aspects and on 7nm, sold it for $599, and it would have been a much, much bigger win, financially and user-base-wise. Less so tech-wise.
Apple's got a closed ecosystem user-wise. They can afford to tinker with profit/loss and maybe bump prices up a bit just to keep the ball rolling. Having strong technology locked up in their ecosystem just makes it more likely that they can drag people into their way of doing things. Once you go Mac, it's hard to get out for a variety of reasons. There really isn't any need for them to lower prices and use an older node, when they can just raise prices and use the bleeding edge.

But @DrMrLordX said the M1 must be slower than the 4900HS because it loses in Cinebench, which is just a niche product that Ryzen owners use to stretch their e-pe....
I see someone's still upset that he got caught mis-using hyperbole. Be more careful about what you say next time.

They do video editing, web browsing, opening apps, transferring files, doing designs, etc. All of these things, the M1 wrecks Renoir in performance.
Actually, I think if people did enough of those things all at once, the 4900H (and maybe the HS) would stand a chance of stacking up pretty well against the M1, albeit at much worse battery life/higher power draw.

The M1 chip is the fastest laptop CPU you can buy, period. It's also the fastest laptop chip for the vast majority of the time, period.
That's . . . contradictory. Is English your first language? Honest question.
 
