Discussion Quo vadis Apple Macs - Intel, AMD and/or ARM CPUs? ARM it is!

Page 11 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

moinmoin

Diamond Member
Jun 1, 2017
4,956
7,675
136
Due to popular demand I thought somebody should start a proper thread on this pervasive topic. So why not do it myself? ;)

For nearly a decade now Apple has treated its line of Mac laptops, AIOs and Pro workstations more like a stepchild. The iOS line of products has surpassed it in market size and profit. The dedicated Mac hardware group was dissolved. Hardware and software updates have been lackluster.

But for Intel, Apple clearly is still a major customer: Intel still supplies custom chips not to be had outside of Apple products, and is clearly eager to keep Apple as a major showcase customer at all costs.

On the high end of performance, Apple's few efforts to create technologically impressive products using Intel parts increasingly fall flat. The 3rd gen Mac Pro, going up to 28 cores, could have wowed the audience in earlier years, but when launched in 2019 it already faced 32-core Threadripper/Epyc parts, with 64-core updates of them already on the horizon. A similar fate appears to be coming for the laptops as well, with Ryzen Mobile 4000 besting comparable Intel solutions across the board and run-of-the-mill OEMs bound to surpass Apple products in battery life. A switch to AMD shouldn't even be a big step considering Apple already has a close working relationship with them, sourcing custom GPUs from AMD like it does custom CPUs from Intel.

On the low end Apple is pushing iPadOS into becoming a workable multitasking system, with decent keyboard and, most recently, mouse support. Considering the much bigger audience familiar with the iOS mobile interface and App Store, it may make sense to eventually offer a laptop form factor using the already-tweaked iPadOS.

By the looks of things, Apple Mac products are due to continue stagnating. But just like for Intel, the status quo for Mac products feels increasingly untenable.
 
  • Like
Reactions: Vattila

Hitman928

Diamond Member
Apr 15, 2012
5,324
8,015
136
Am I missing something here?
Yes, Rate-1. Anand is measuring single-thread performance with SPEC; there are no multithreaded tests at all.

Just for clarification: SPEC speed runs one instance of each test, and you can choose how many threads the program is allowed to use. SPEC rate runs each test single-threaded, but you can choose how many instances of the test run at the same time in order to exercise more cores. In general, I think speed with multithreading would be more interesting for consumer-oriented systems, where the user is usually only running one intensive thing at a time and doesn't have workloads that scale well across many cores. Rate is more interesting for servers, where you are typically running one thing that scales really well across many cores, or running many less-threaded workloads at the same time. Obviously individual use cases may differ. Anyway, if you run SPEC speed without multithreading and SPEC rate with only one instance, you are basically running the same single-threaded test.
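The speed-versus-rate distinction above can be sketched with simplified scoring formulas. This is only an illustration: the reference and measured times below are made up, not real SPEC reference times, and real SPEC scores are geometric means over many benchmarks.

```python
# Simplified illustration of SPEC "speed" vs "rate" metrics.
# Times are invented for illustration; higher scores are better.

def speed_score(ref_seconds: float, measured_seconds: float) -> float:
    """Speed: one copy of the workload (which may use many threads);
    score is the reference machine's time over the measured time."""
    return ref_seconds / measured_seconds

def rate_score(ref_seconds: float, measured_seconds: float, copies: int) -> float:
    """Rate: 'copies' independent single-threaded instances run
    concurrently; throughput scales with the number of copies."""
    return copies * ref_seconds / measured_seconds

# With one copy, rate collapses to the same ratio as a
# single-threaded speed run -- the point made above:
assert rate_score(1000, 250, copies=1) == speed_score(1000, 250)

# Eight copies that each still finish in the same time indicate
# perfect throughput scaling across eight cores:
assert rate_score(1000, 250, copies=8) == 8 * speed_score(1000, 250)
```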
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Am I missing something here?

You missed that these were rate-1 benchmarks, essentially single-threaded. Of course now I can see where your confusion and distrust of SPEC is coming from: you were assuming these were multi-threaded benchmarks, where an R3900X should be much faster.
 
  • Like
Reactions: Etain05

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,573
14,526
136
One thing that I really hate: having to depend on 14-year-old benchmarks for ARM devices, and then having users tell us how valid they are.

@Etain05, SO 14 YEAR OLD BENCHMARKS ARE SUPPOSED TO BE GOOD? THAT'S WHY YOU DOWNVOTED ME?
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Anandtech uses SPEC2006 for mobile CPU tests because they have to rebuild all the tests to run on Android/Apple if builds don't already exist for them, and they've already done this for SPEC2006 but not for SPEC2017. I'm sure rebuilding them for Apple is the bigger hurdle.

I figured it was something like this. I just found it odd that they were using such an old benchmark to begin with. According to Spec themselves, the benchmark has been retired.
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
Probably the best forum around for technical content. Horrible for navigation.
Yep.

Bringing RWT Forums into the topic of Apple ARM chips: some time ago on that forum, David Kanter told one of the internet strongmen and ARM supporters that he had talked with Apple engineers, and they said flat out that in order to increase clock speeds they would need to supply the SoC with sufficient amperage, and that was impossible at the time. And we are not talking about 80 amps, we are talking about even 40 amps.

No idea how that is possible, or how that has changed right now, but looking at the fact that 5 nm chips with an 8/4 core config will still be under 7 W of power, I would not hold my breath.
 
  • Like
Reactions: lightmanek

Doug S

Platinum Member
Feb 8, 2020
2,269
3,522
136
Haha. So the guy who brings you the topic of Apple releasing an ARM-based computer and a laptop is suddenly also wrong on the topic of Apple being required to use more cores to compete with Intel, because Apple designs are not powerful enough.

If that is the case and Gurman is wrong, he is also wrong on the topic of Apple switching from Intel to their own ARM chips ;).

Jesus. This is getting ridiculous.

If he was the only source of this rumor then I wouldn't give it any credence. But these rumors have been around for a long time, he's just jumping on the bandwagon. It isn't as if that article contained ANY original reporting.
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
If he was the only source of this rumor then I wouldn't give it any credence. But these rumors have been around for a long time, he's just jumping on the bandwagon. It isn't as if that article contained ANY original reporting.
Kek.

This has gotten ridiculous.

No, my friend. EVERYBODY has been repeating Gurman's info since 2018, because that was when he first broke the news that Apple is dumping Intel. Then he broke the news about Project Catalyst, when even the well-informed people in the Apple world dismissed and disbelieved him on this front, which is why there was quite a storm in the Apple world.

His article has ALL of its information from sources. Not opinions. Sources. If Gurman says that an 8/4 SoC cannot compete with Intel x86 CPUs, it's because he got this info from Apple engineers. It's that simple; why deflect it? If he says that the 8/4 SoC is for entry-level products, it's because that is the info he got from his Apple sources. Why is it so hard to believe his info? Confirmation bias?

It's laughable to me that when some part of Gurman's info doesn't fit people's beliefs about Apple products, his info suddenly becomes less credible, and the diminishing of his credibility goes into full swing.

And lastly: Gurman in his articles does not state opinions. He states facts, unless he is doing an editorial for Bloomberg. And he is not doing those.
 
Last edited:

Doug S

Platinum Member
Feb 8, 2020
2,269
3,522
136
I figured it was something like this. I just found it odd that they were using such an old benchmark to begin with. According to Spec themselves, the benchmark has been retired.

They always "retire" the last generation benchmark. A few years back Anandtech used SPEC2000 on smartphones, because a few of SPEC2006's tests required more RAM than was available on some models. Once they met the baseline (2GB or whatever) they updated to SPEC2006, but meanwhile SPEC had introduced SPEC2017.

The reason for the 1x runs on SPEC2017 is again to hold down memory requirements. If you run 4x or 8x to use all available threads it requires much more memory and few if any smartphones would be able to manage it.
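The memory pressure described above scales linearly with copy count, since each rate copy is an independent process. A rough sketch; the per-copy footprint figure here is an illustrative assumption, not an official SPEC requirement:

```python
# Rough memory-requirement estimate for a SPEC rate run: each copy is
# an independent process, so the total footprint scales linearly with
# the copy count. The ~2 GB per-copy figure is an assumption for
# illustration only.

def rate_memory_gb(copies: int, per_copy_gb: float = 2.0) -> float:
    """Approximate RAM needed for 'copies' concurrent instances."""
    return copies * per_copy_gb

# A rate-1 run fits in a phone's RAM budget...
assert rate_memory_gb(1) == 2.0

# ...but a rate-8 run to load all threads would already need ~16 GB,
# beyond what most smartphones could manage.
assert rate_memory_gb(8) == 16.0
```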

Most people reading Anandtech weren't around the workstation world in the 90s and don't know where SPEC comes from. It was designed as a performance test for workstations - the old school RISC workstations from back when x86 was seen as just a toy the way some PC heads see smartphones these days. That's why SPECfp is full of Fortran code, and it heavily slants scientific. It isn't designed to test the UI, and in fact runs from a command line without any UI at all! It is designed to test the CPU and memory system running the sort of code scientists and engineers are likely to run.

Unfortunately there really are NO benchmarks that really test the UI. But I'm not sure what the point would be anyway: if such benchmarks existed, you'd see a lot of the fluff (image transitions, fade in/fade out and that sort of thing) removed to make them perform better on benchmarks. I agree with Linus that the best single benchmark to approximate typical modern application performance is gcc/llvm. You can't pull any tricks to make a compiler's code run faster: the code won't fit in cache, branches aren't easy to predict, and you certainly can't accelerate it with SIMD or other special instructions. If one CPU is 2x faster than another in gcc, it will be faster in regular usage. There's no way to "cheat the system" in those benchmarks the way it's done all the time in gaming benchmarks.
 

coercitiv

Diamond Member
Jan 24, 2014
6,214
11,959
136
If he was the only source of this rumor then I wouldn't give it any credence. But these rumors have been around for a long time, he's just jumping on the bandwagon. It isn't as if that article contained ANY original reporting.
Then we need to make a choice here, since it was the Bloomberg report that finally turned many around on accepting that Apple is moving ARM up their product stack. (even if slowly)

I'm not a fan of the way @Glo. handles discussion/speculation on upcoming products, but on this issue I agree 100%: either we take the Bloomberg report entirely into consideration, or we discard it for being sketchy. Picking and choosing sounds less like validating leaks and more like having the cake and eating it too.
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
Then we need to make a choice here, since it was the Bloomberg report that finally turned many around on accepting that Apple is moving ARM up their product stack. (even if slowly)

I'm not a fan of the way @Glo. handles discussion/speculation on upcoming products, but on this issue I agree 100%: either we take the Bloomberg report entirely into consideration, or we discard it for being sketchy. Picking and choosing sounds less like validating leaks and more like having the cake and eating it too.
If Apple rumors are reported by Gurman, they are not rumors anymore. They are facts, and they will manifest in the world, because Gurman has got confirmation from his sources on the topic and more info to put into his articles.

Gurman is one of those journalists who actually have Apple sources. The other leaker currently getting a lot of traction is Jon Prosser. Previously there were Jim Dalrymple and another guy from The Loop whose name I cannot recall right now.

Gurman does not post editorials. He simply posts articles based on what he got from his own Apple sources.

If he is saying that the 8/4 SoC chips are not able to compete with Intel x86 chips, it is a fact, and there is zero point in dismissing it, especially considering that the products those ARM chips will land in will sit below higher-performing products from a product segmentation perspective.

There is a reason why the MacBook will sit below the MacBook Air, and why it and the entry-level iMac will get those chips, while the rest of the lineup will get x86.

It's because, contrary to popular belief, even Apple's ARM is not ready to replace x86 at the highest performance levels.
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
The reason for the 1x runs on SPEC2017 is again to hold down memory requirements. If you run 4x or 8x to use all available threads it requires much more memory and few if any smartphones would be able to manage it.
This speaks loudly.

Most people reading Anandtech weren't around the workstation world in the 90s and don't know where SPEC comes from. It was designed as a performance test for workstations - the old school RISC workstations from back when x86 was seen as just a toy the way some PC heads see smartphones these days. That's why SPECfp is full of Fortran code, and it heavily slants scientific. It isn't designed to test the UI, and in fact runs from a command line without any UI at all! It is designed to test the CPU and memory system running the sort of code scientists and engineers are likely to run.

Unfortunately there really are NO benchmarks that really test the UI. But I'm not sure what the point would be anyway: if such benchmarks existed, you'd see a lot of the fluff (image transitions, fade in/fade out and that sort of thing) removed to make them perform better on benchmarks. I agree with Linus that the best single benchmark to approximate typical modern application performance is gcc/llvm. You can't pull any tricks to make a compiler's code run faster: the code won't fit in cache, branches aren't easy to predict, and you certainly can't accelerate it with SIMD or other special instructions. If one CPU is 2x faster than another in gcc, it will be faster in regular usage. There's no way to "cheat the system" in those benchmarks the way it's done all the time in gaming benchmarks.
Thanks for running through this. Very insightful.
 

naukkis

Senior member
Jun 5, 2002
706
578
136
Bringing RWT Forums into the topic of Apple ARM chips: some time ago on that forum, David Kanter told one of the internet strongmen and ARM supporters that he had talked with Apple engineers, and they said flat out that in order to increase clock speeds they would need to supply the SoC with sufficient amperage, and that was impossible at the time. And we are not talking about 80 amps, we are talking about even 40 amps.

There are some restrictions when the SoC is in a phone...

No idea how that is possible, or how that has changed right now, but looking at the fact that 5 nm chips with an 8/4 core config will still be under 7 W of power, I would not hold my breath.

Putting that same SoC into a laptop or desktop makes it possible to provide more power and run it at higher frequencies. And where did you get the idea that Apple keeps that 12-core design under a 7 W power envelope?
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
If they put that same SOC to laptop or desktop makes it possible to provide more power to run that same SOC to higher frequencies. And where you got that idea that Apple keeps that 12-core design chip under 7W power envelope?
It's really not that simple. Your explanation is like 'Santa can't exist because elves are not real'.
 
  • Haha
Reactions: lightmanek

naukkis

Senior member
Jun 5, 2002
706
578
136
It's really not that simple. Your explanation is like 'Santa can't exist because elves are not real'.

Have you ever overclocked anything? A poor VRM will decrease clocking potential: not only does it limit max current, but lightweight regulation won't provide stable enough voltage to extract the maximum clocking potential at a given voltage. I'm pretty sure the VRM in a phone limits SoC clocking potential pretty severely.
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
There are some restrictions when the SoC is in a phone...



Putting that same SoC into a laptop or desktop makes it possible to provide more power and run it at higher frequencies. And where did you get the idea that Apple keeps that 12-core design under a 7 W power envelope?
It was not impossible because of phone SoC limitations. It was impossible because of PROCESS NODE and PHYSICAL DESIGN LIMITATIONS.

That is what Kanter meant. You guys have to realize that ALL of Apple's designs are always on high-density, extremely low-power nodes. Physical design and architecture design are always trade-offs. I'm not competent enough to go deep on it, but even to me it's blatantly obvious that if Apple had been able to build fast, powerful ARM chips and put them in a laptop, they would have done it YEARS ago. They haven't. Why?

Because technology was limiting them. Right now Apple is readying entry-level products that are BARELY enough for desktop ecosystem needs, and that still lose to x86 parts. But yeah, suddenly overclocking the chip will do, which is a laughable idea in itself.
Have you ever overclocked anything? Poor vrm will decrease clocking potential - not only max current limit but lightweight regulation won't provide stable enough voltage to extract max clocking potential from given voltage. I'm pretty sure that vrm in phone limits soc clocking potential pretty severely.
You are mistaking cause for effect. Overclocking capabilities are ALWAYS defined by physical and architectural design decisions. It's not as simple as providing the CPU with more amps, because that capability is defined on the drawing board, by physical design engineers.

If Apple engineers were not able to provide those SoCs with more amps, it's because of architectural and physical design decisions, and process node capabilities. There is a good reason why nobody in the industry makes non-mobile chips on a mobile process, and why everybody in the industry makes mobile chips on a mobile process. The 5 nm process we are talking about is still a mobile process. That won't change.
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
Then we need to make a choice here, since it was the Bloomberg report that finally turned many around on accepting that Apple is moving ARM up their product stack. (even if slowly)

The Apple Mac ARM migration rumour is nothing new. It started as soon as Apple released the first in-house SoC and accelerated when Apple outperformed every other ARM SoC year after year.

And as soon as some big site posts a new article about the rumour, there's a lot of activity in the forums and after a while, nobody talks about it.

Just as a reminder, a 10sec Google search:

26 June 2019 : Apple Hires Key Chip Designer From ARM as Own Efforts Ramp Up

Apple Inc. hired one of ARM Holdings Inc.’s top chip engineers as the iPhone maker looks to expand its own chip development to more powerful devices, including the Mac, and new categories like a headset.

2 April 2018 : Apple Plans to Use Its Own Chips in Macs From 2020, Replacing Intel

Apple Inc. is planning to use its own chips in Mac computers beginning as early as 2020, replacing processors from Intel Corp., according to people familiar with the plans.

1 February 2017 : Apple Said to Work on Mac Chip That Would Lessen Intel Role
Apple started exploring a shift away from Intel processors five years ago partly to improve laptop power efficiency, Bloomberg News has reported. The new chip may first become available in an upgraded version of the MacBook Pro laptop planned for later this year, the people said.

After another 10-sec Google search, I've found the mention of Firestorm:

Mar 24, 2020
A14 is codenamed Firestorm

So, what exactly in the Bloomberg article is "news" ?
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
So, what exactly in the Bloomberg article is "news" ?
It's funny, because you ask what in the Bloomberg article is news while also providing... Bloomberg articles, with Mark Gurman as their author. So Gurman has been the author of ALL of the reports of Apple working on ARM chips for the Mac.

P.S. It's interesting that the A14 codename is Firestorm.
 

coercitiv

Diamond Member
Jan 24, 2014
6,214
11,959
136
Just as a reminder, a 10sec Google search:
Look at all the Bloomberg articles you're quoting, all written by the same Mark Gurman and Ian King. If we invalidate their latest article, we invalidate all of them. As I wrote before, saying the author does not have good info on the subject has a very nasty butterfly effect.
 
Last edited:
  • Like
Reactions: Tlh97 and Glo.

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Have you ever overclocked anything? A poor VRM will decrease clocking potential: not only does it limit max current, but lightweight regulation won't provide stable enough voltage to extract the maximum clocking potential at a given voltage. I'm pretty sure the VRM in a phone limits SoC clocking potential pretty severely.

I thought the frequency/voltage curve graph for the A12 invalidated that argument:

a12-fvcurve.png
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
I thought the frequency/voltage curve graph for the A12 invalidated that argument:

a12-fvcurve.png
Interesting. So the whole SoC at 1.0 V has under 4 amps of current available.

Jesus. Getting it to 40 amps is a bigger mountain to climb than I initially thought.

Just this diagram fully explains to me why Apple would be required to tackle Intel (and AMD) with more cores than Intel can offer in their CPUs, instead of increasing clock speeds.

Extreme parallelization, because of core design limitations.
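The amperage figures being discussed follow directly from P = V × I. A quick sketch; the wattages below are illustrative assumptions, not measured values for any Apple SoC:

```python
# Back-of-the-envelope arithmetic (P = V * I) behind the amperage
# discussion above. The power figures are illustrative, not measured.

def current_amps(power_watts: float, voltage_volts: float) -> float:
    """Current drawn at a given power draw and supply voltage."""
    return power_watts / voltage_volts

# A ~4 W SoC on a ~1.0 V rail draws about 4 A:
assert current_amps(4.0, 1.0) == 4.0

# Supplying 40 A at the same ~1.0 V would mean roughly 40 W,
# an order of magnitude above a phone SoC's power budget:
assert current_amps(40.0, 1.0) == 40.0
```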
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
Look at all the Bloomberg articles you're quoting, all written by the same Mark Gurman and Ian King. If we invalidate their latest article, we invalidate all of them. As I wrote before, saying the author does not have good info on the subject has a very nasty butterfly effect.
Let's also look at the dates showing how long Gurman has been reporting this information.

Years in the making, it is finally manifesting.