Discussion Quo vadis Apple Macs - Intel, AMD and/or ARM CPUs? ARM it is!


moinmoin

Diamond Member
Jun 1, 2017
4,934
7,619
136
Due to popular demand I thought somebody should start a proper thread on this pervasive topic. So why not do it myself? ;)

For nearly a decade now Apple has treated their line of Mac laptops, AIOs and Pro workstations more like a stepchild. Their iOS line of products has surpassed it in market size and profit. Their dedicated Mac hardware group was dissolved. Hardware and software updates have been lackluster.

For Intel, though, Apple clearly is still a major customer, with Intel still offering custom chips not to be had outside of Apple products. Clearly Intel is eager to keep Apple as a major showcase customer at all costs.

On the high end of performance, Apple's few efforts to create technologically impressive products using Intel parts increasingly fall flat. The 3rd gen of Mac Pros going up to 28 cores could have wowed the audience in earlier years, but when launched in 2019 it already faced 32-core Threadripper/Epyc parts, with 64-core updates of them already on the horizon. A similar fate appears to be coming for the laptops as well, with Ryzen Mobile 4000 besting comparable Intel solutions across the board and run-of-the-mill OEMs bound to surpass Apple products in battery life. A switch to AMD shouldn't even be a big step, considering Apple already has a close working relationship with them, sourcing custom GPUs from them like they do CPUs from Intel.

On the low end Apple is pushing iPadOS into becoming a workable multitasking system, with decent keyboard and, most recently, mouse support. Considering the much bigger audience familiar with the iOS mobile interface and App Store, it may make sense to eventually offer a laptop form factor using the already tweaked iPadOS.

By the looks of it, Apple's Mac products are due to continue stagnating. But just like for Intel, the status quo for Mac products feels increasingly untenable.
 

Glo.

Diamond Member
Apr 25, 2015
5,661
4,419
136
Indeed it's from the compiler - as I said, the OS has minimal impact. Use the same compiler and you should get about the same scores. The ABI for AArch64 on Linux and Windows is slightly different as well - so the generated code might have different register allocation even when using the same compiler.
And how is that different from different compilers for iOS and Windows x86? You wanted to disprove it, and yet you have proven that there are in fact differences between the PLATFORMS you test on that yield better performance on iOS.

Hint: the comparison was between iOS A12X and Ryzen 7 4700U scores in GB. I hope you get it now.
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
Indeed it's from the compiler - as I said, the OS has minimal impact. Use the same compiler and you should get about the same scores. The ABI for AArch64 on Linux and Windows is slightly different as well - so the generated code might have different register allocation even when using the same compiler.
To bring it full circle, since we know there are differences between compilers on different OSes, I think it would be interesting to see how Xcode-compiled code on macOS compares to Clang/LLVM on Win10/Ubuntu - all on the same MacBook. Michael Larabel did an Xcode vs GCC comparison over on Phoronix, but the variance between compilers is too big to make a call.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,590
5,722
136
To bring it full circle, since we know there are differences between compilers on different OSes, I think it would be interesting to see how Xcode-compiled code on macOS compares to Clang/LLVM on Win10/Ubuntu - all on the same MacBook. Michael Larabel did an Xcode vs GCC comparison over on Phoronix, but the variance between compilers is too big to make a call.
The question now would be: did the same MacBook get a higher GB score running macOS compared to when it was running Linux/Win10?

The problem with open-source software like Linux/gcc/LLVM and co. is that every Tom, Dick and Harry IHV/ISV adds patches to it, and to upstream anything you have to ensure that it does not break anything for others, even if that means degraded performance. Not that the alternatives are any better.
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
GB3 and earlier had really large differences between OSes. GB4 and 5 are much better in this regard, but there is still a significant difference in my experience and in all the testing I've seen.


GB5 should offer zero advantage to Linux now.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
And how is that different from different compilers for iOS and Windows x86? You wanted to disprove it, and yet you have proven that there are in fact differences between the PLATFORMS you test on that yield better performance on iOS.

Hint: the comparison was between iOS A12X and Ryzen 7 4700U scores in GB. I hope you get it now.
There will always be some difference. The Apple A12 Vortex core has 65% higher IPC and that is the main factor here. The A12 is fast due to an excellent super-wide core design. ARM Holdings, AMD and Intel are about 4 years behind Apple in development. That's all.
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
There will always be some difference. The Apple A12 Vortex core has 65% higher IPC and that is the main factor here. The A12 is fast due to an excellent super-wide core design. ARM Holdings, AMD and Intel are about 4 years behind Apple in development. That's all.

Still waiting for you to compile some command-line open source benchmarks for iOS and show us what it can do outside of Geekbench and SPEC.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,633
136

GB5 should offer zero advantage to Linux now.

Should and does are often different things. I already posted a link to Phoronix doing the comparison and showing GB5 performing 10% faster on Linux than on Windows. You can also look up the Linux vs Windows tests for the same CPUs in their results browser and see that Linux consistently scores higher. Again, they use different compilers on each OS, so there are going to be performance differences just from that.

Edit:

Here's another one. It's just a random youtuber with very low production quality but it again reflects my own testing and what I've seen from others.

 

Glo.

Diamond Member
Apr 25, 2015
5,661
4,419
136
Should and does are often different things. I already posted a link to Phoronix doing the comparison and showing GB5 performing 10% faster on Linux than on Windows. You can also look up the Linux vs Windows tests for the same CPUs in their results browser and see that Linux consistently scores higher. Again, they use different compilers on each OS, so there are going to be performance differences just from that.

Edit:

Here's another one. It's just a random youtuber with very low production quality but it again reflects my own testing and what I've seen from others.

https://www.youtube.com/watch?v=P2dACq3F_W4
https://www.youtube.com/watch?v=9GpBsfdPmP0

Different youtubers, different hardware, Blender and DaVinci Resolve comparisons between Windows and Linux.

It's a pattern. Platforms do matter, no matter what outlandish claims people post about Apple ARM CPUs having a 65% IPC advantage over competitors.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,633
136
@Hitman928

https://forums.anandtech.com/threads/geekbench-5-vs-geekbench-4.2569654/post-39925571

???

That .pdf says they use the same compiler now. Also, people on this forum tested them on different OSes with the same hardware and got the same results each time.

Not much to go off of in that post, and both of his links for scores go to the same test on Windows, so it's hard to comment on that, but his stated scores still show a 5% advantage for Linux.

Looks like I was mistaken though about using different compilers between Windows and Linux; it seems with GB5 they are now using Clang for both. Android and iOS/macOS still get their custom compilers though.
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
Not much to go off of in that post, and both of his links for scores go to the same test on Windows, so it's hard to comment on that, but his stated scores still show a 5% advantage for Linux.

He recorded a ~1% advantage in ST for Linux and a 5% advantage in MT for Win10.

Looks like I was mistaken though on using different compilers between Windows and Linux, seems like with GB5 they are now using Clang for both.

His link to his Linux results is broken, sadly.

Anyway, as lovely as GB5 and the like are, it would be great to see some more cross-platform benchmarks for iOS that could show us the true power of the A12X or A13. Too bad Apple makes that such a PITA.
 

scannall

Golden Member
Jan 1, 2012
1,944
1,638
136
Here is a more Apples to Apples comparison.

Geekbench 5: Apple MacBook Pro i5-8279U vs iPhone 11 A13.

iOS is OS X with a touch interface and the desktop-only bits removed, using the same compilers etc. So it should be a fairly representative comparison. In single thread the A13 is roughly 22% faster than the i5, at a much lower power draw. While that's certainly awesome, it isn't 65%.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,633
136
He recorded a ~1% advantage in ST for Linux and a 5% advantage in MT for Win10.



His link to his Linux results is broken, sadly.

Anyway, as lovely as GB5 and the like are, it would be great to see some more cross-platform benchmarks for iOS that could show us the true power of the A12X or A13. Too bad Apple makes that such a PITA.

I figured that his Linux score was the higher one because he said that Linux didn't perform much better this time.

Anyway, I just ran it on my work box with a 2700 at stock, both in Linux and Windows, and Linux showed a 9.7% - 15% performance advantage depending on ST or MT.
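For anyone who wants to run the same comparison on their own box, the percentage advantage is just the score ratio minus one. A minimal Python sketch (the scores below are placeholder numbers for illustration, not the actual results from this run):

```python
def linux_advantage(linux_score: float, windows_score: float) -> float:
    """Percent by which the Linux GB5 score beats the Windows score."""
    return (linux_score / windows_score - 1.0) * 100.0

# Placeholder single-thread and multi-thread scores, illustration only
print(round(linux_advantage(1097, 1000), 1))   # ST example -> 9.7
print(round(linux_advantage(9200, 8000), 1))   # MT example -> 15.0
```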

 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
Here is a more Apples to Apples comparison.

Geekbench 5: Apple MacBook Pro i5-8279U vs iPhone 11 A13.

iOS is OS X with a touch interface and the desktop-only bits removed, using the same compilers etc. So it should be a fairly representative comparison. In single thread the A13 is roughly 22% faster than the i5, at a much lower power draw. While that's certainly awesome, it isn't 65%.
The only issue is that to appease Richie Rich, the raw score isn't good enough; you have to divide by GHz to obtain the GB5 scaled score (we should NOT be calling it IPC). Never mind that no one computes at a fixed GHz on a single thread, but that is what the audience (the guy you're responding to) demands.

i5 @ 4.1 GHz boost
A13 @ 2.67 GHz boost

So i5 ST scaled score is 1094 / 4.1 GHz = 266.8 GB5 per GHz
And A13 ST scaled score is 1336 / 2.67 GHz = 500.4 GB5 per GHz

The A13 makes the Intel chip look downright pedestrian.
One could postulate that the 4700U or 4900HS would destroy the i5, but until it's in a MacBook we won't know.

Granted, who cares? While the overall ST difference is 22% in favor of the A13 and the MT difference is 28% in favor of the i5, no one uses GB5 to do work.

And also, the i5 can actually do real productivity work, today. The A13 is limited to the iPhone. So it's mostly a technical argument.
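The division above is trivial to sanity-check. A quick Python sketch using only the scores and boost clocks quoted in this post:

```python
# GB5 single-thread scores and boost clocks as quoted above
i5_score, i5_ghz = 1094, 4.1      # MacBook Pro i5-8279U
a13_score, a13_ghz = 1336, 2.67   # iPhone 11 A13

# "Scaled score" = GB5 points per GHz (deliberately not called IPC)
i5_scaled = i5_score / i5_ghz
a13_scaled = a13_score / a13_ghz

print(round(i5_scaled, 1))               # 266.8
print(round(a13_scaled, 1))              # 500.4
print(round(a13_scaled / i5_scaled, 2))  # 1.88
```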
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
i5 @ 4.1 GHz boost
A13 @ 2.67 GHz boost

So i5 ST scaled score is 1094 / 4.1 GHz = 266.8 GB5 per GHz
And A13 ST scaled score is 1336 / 2.67 GHz = 500.4 GB5 per GHz
And it means the A13 is 1.88x faster per GHz than Intel. Which means +88%, nicely matching Andrei's +83% SPEC2006 results. Just don't be scared to calculate this IPC advantage. Denial won't help x86 perform any better. However I admit it hurts less ego hiding the fact x86 CPUs are so garbage.

Real SW on the A13 won't show that +80% IPC advantage, of course. That's exactly why the IT industry does these tests. Probably they need to throw money and time out of the window by performing totally useless tests :D...:rolleyes:
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
And it means the A13 is 1.88x faster per GHz than Intel. Which means +88%, nicely matching Andrei's +83% SPEC2006 results. Just don't be scared to calculate this IPC advantage. Denial won't help x86 perform any better. However I admit it hurts less ego hiding the fact x86 CPUs are so garbage.

Real SW on the A13 won't show that +80% IPC advantage, of course. That's exactly why the IT industry does these tests. Probably they need to throw money and time out of the window by performing totally useless tests :D...:rolleyes:
Your posts are really twisted. I don't know how you can stand the fact that most of the forum members do not like you or your posts.

From my perspective, I do not like them, since you compare a phone CPU to a desktop. They have different purposes and are engineered for that, but you can't see it.

What drugs are you currently on ?
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,683
1,218
136
From my perspective, I do not like them, since you compare a phone CPU to a desktop. They have different purposes and are engineered for that, but you can't see it.
Intel will never use mobile-derivatives of Banias/Dothan/Yonah to replace an actual desktop-derived CPU like Tejas/Jayhawk. Completely yugely unfeasible!!!

GHz desktop will always be faster than IPC mobile. Always, always, always!!! Making such a comparison is completely absurd, what drugs are these people on?! Don't they realize NETBURST1999 @ 4 GHz is better than CORE2003 @ 1.8 GHz?!!?!?!

I can 100% guarantee that nothing derived from this https://ark.intel.com/content/www/u...ssor-t1400-2m-cache-1-83-ghz-667-mhz-fsb.html

will ever beat something derived from this https://ark.intel.com/content/www/u...echnology-4-00-ghz-2m-cache-1066-mhz-fsb.html

You have to really question these people on drugs huh?
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
@NostaSeronx
A nice one. Some people are not able to recognize simple sarcasm, yet they are allowed to moderate. Jeez, how far AnandTech can fall....


Moderator call outs are not allowed. If you have
an issue with moderation, your only option is to
create a thread in moderator discussions.


AT Mod Usandthem
 
Last edited by a moderator:

NostaSeronx

Diamond Member
Sep 18, 2011
3,683
1,218
136
A nice one.
Apple A14 in a NUC format with active cooling or phat passive cooling. Probably will curb stomp the current Mac Mini, iMac, Macbook Air...

==> Lead customer joint product developments with SoC and system companies – collaboration for enabling next generation memory systems
- w/ Intel, AMD: DDR4 (3.2Gbps) / DDR5 (6.4Gbps)
- w/ Qualcomm, HiSilicon, Apple: LPDDR4x (4.2Gbps) / LPDDR5 (6.4Gbps)
- w/ Nvidia, AMD, Apple – 14Gbps GDDR6 (up to 18Gbps)

Make a derivative 64-bit LPDDR5 (system-side) and 32-bit GDDR6 (GPU-side) solution. Apple could then basically claim the premium low-tier gaming console. Add an IMG/Apple RTU solution and get really far with that.
=> "We have publicly said that ray tracing enabled GPU IP is on our roadmap with delivery in late 2020/early 2021 timeframe and we are already well advanced in our conversations with leading SoC designers about access to our patented IP ray tracing architecture under a range of different business models."

Post-Zen 2 CPU performance and post-Switch GPU performance. Apple could easily penetrate these markets and pretty much have a fully home-grown solution.
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
And it means the A13 is 1.88x faster per GHz than Intel. Which means +88%, nicely matching Andrei's +83% SPEC2006 results. Just don't be scared to calculate this IPC advantage. Denial won't help x86 perform any better. However I admit it hurts less ego hiding the fact x86 CPUs are so garbage.

Real SW on the A13 won't show that +80% IPC advantage, of course. That's exactly why the IT industry does these tests. Probably they need to throw money and time out of the window by performing totally useless tests :D...:rolleyes:
It isn't IPC, it's scaled benchmark results. It has almost no real-world applicability, just like the benchmarks that produce the result. Case in point: when was the last time someone completed a big project using single-thread-limited SPEC or Geekbench? Because those are the only performance wins the A13 has over current mainstream x86.

You'll have to ask Andrei why he would "throw money and time out of the window by performing totally useless tests" re: single-threaded scaled benchmark results (that are unfairly set up and esoteric to boot). I think you were being sarcastic. I'm not.

As my final message to you, what is it with your personal attacks on anyone who thinks x86 has a place in the computing world? I try to make a point for you that the A13 is stellar in scaled single-threaded results - and that's your response? "don't be scared", "denial won't help", "I admit it hurts less ego hiding the fact that x86 CPUs are so garbage"? With my post, I was trying to throw you a bone, and you decided to bite my hand. Bad dog. I won't make the mistake again.
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
What's funny is that if you compare my Zen+ 2700 GB5 score on Linux to the Skylake CPU on MacOS, my 2700 has 8% higher perf/clock. Pretty sure that wouldn't hold up under different tests.

I dunno, the 8279u ain't all that. I'm starting to think you might be on to something though, despite alleged compiler normalization for GB5.
 

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,633
136
I dunno, the 8279u ain't all that. I'm starting to think you might be on to something though, despite alleged compiler normalization for GB5.

How about my 2700 being 15.7% faster performance per clock than a 9900K on MacOS?

 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
How about my 2700 being 15.7% faster performance per clock than a 9900K on MacOS?


Wow, MacOS GB5 sucks. Also:


Frequencies were pretty high on that chip too. Looks like it had the standard PL1 configuration.

If Apple ever does move A-series ARM chips to MacOS, they may lose some performance right off the rip.