
Discussion Quo vadis Apple Macs - Intel, AMD and/or ARM CPUs? ARM it is!


awesomedeluxe

Member
Feb 12, 2020
69
23
41
The thing is, the ultrabook segment where ARM chips would be well-positioned is actually getting good parts from Intel right now. Tiger Lake will probably show up in a lot of attractive machines this year. And AMD is starting to close in on this area, too.

The iPad-notebook hybrid still seems like the best fit for these chips. I feel like "iPadOS" exists for this reason.

But it's hard to see the upside of making this transition right now. It would have been a great move in 2017.
 

shiznit

Senior member
Nov 16, 2004
422
13
81
Apple sells ~20M Macs per year, but sales are decreasing YoY. The upside would be in increasing margins and sales. A quick Google search came up with a Core i7-1065G7 price of $426 before Apple's discount. How much would an in-house chip let them cut costs and reach a broader market? Most of the R&D cost would already be covered by the mobile SoCs. I think a $600-700 Macbook "SE" would do very well.
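For a rough sense of the margin math, here's a sketch with purely hypothetical numbers. The discount Apple actually gets from Intel and the per-unit cost of an in-house SoC are not public, so both figures below are assumptions, not facts:

```python
# All numbers are illustrative guesses, not real figures.
INTEL_LIST_PRICE = 426.0       # Core i7-1065G7 list price (from the post above)
ASSUMED_APPLE_DISCOUNT = 0.30  # assumption: volume discount off list price
ASSUMED_INHOUSE_COST = 75.0    # assumption: marginal cost of an in-house SoC

def cpu_cost_delta(list_price, discount, inhouse_cost):
    """Per-unit savings if the Intel part is swapped for an in-house SoC."""
    paid_to_intel = list_price * (1 - discount)
    return paid_to_intel - inhouse_cost

savings = cpu_cost_delta(INTEL_LIST_PRICE, ASSUMED_APPLE_DISCOUNT, ASSUMED_INHOUSE_COST)
print(f"~${savings:.0f} saved per unit")
```

Even under these guessed inputs, the per-unit CPU savings would be a meaningful slice of a $600-700 price target, which is the whole point of the margin argument.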

That said, I can't wait to get my hands on a Tiger Lake Macbook Air. That CPU is looking really good for the high end ultra-portables.
 

coercitiv

Diamond Member
Jan 24, 2014
4,546
6,258
136
In the meantime, leveraging their ability to exercise full control over their hardware and operating system will yield a bigger bang for a given level of raw performance than we'd see in the x86 world.
But they're not doing it to extract more performance, they're doing it to extract higher margins.
 

shiznit

Senior member
Nov 16, 2004
422
13
81
But they're not doing it to extract more performance, they're doing it to extract higher margins.
I would think that providing better-than-game-console levels of integration will eventually lead to more performance. Apple can afford to dedicate silicon to its own use cases in a way that Intel can't.
 

Thala

Golden Member
Nov 12, 2014
1,266
578
136
Let me melt people's minds even more. Straight from the horse's mouth, which is Mark Gurman, who broke the news about ARM Macs.

"The transition to in-house Apple processor designs would likely begin with a new laptop because the company’s first custom Mac chips won’t be able to rival the performance Intel provides for high-end MacBook Pros, iMacs and the Mac Pro desktop computer. "
https://www.bloomberg.com/news/articles/2020-04-23/apple-aims-to-sell-macs-with-its-own-chips-starting-in-2021 11th paragraph of their analysis.

So basically, according to even Apple themselves, those ARM cores are not fast enough to compete with Intel x86 CPUs.
Lol, what are you talking about? That comment is pure speculation from Bloomberg and has no relation to any Apple source.
 

Thala

Golden Member
Nov 12, 2014
1,266
578
136
iPad Pro apps use HARDWARE DECODERS in the SoCs, not the CPU cores themselves.

Apple genuinely has a "Reality Distortion Field".

P.S. That rumored 8/4-core big.LITTLE design is going to be exactly the same under 7W of power.

Why? It's going to fit in a PASSIVELY cooled 12-inch MacBook.
Using HW decoder to run CPU benchmarks like SPEC? Wake up man!
 

Glo.

Diamond Member
Apr 25, 2015
4,835
3,457
136
Using HW decoder to run CPU benchmarks like SPEC? Wake up man!
Again, is SPEC used for Blender, Final Cut Pro X, Logic Pro X, Photoshop, DaVinci Resolve?

If not, come to me again when they will and we can compare actual performance of the cores.


Especially in long running, multithreaded workloads.
 

coercitiv

Diamond Member
Jan 24, 2014
4,546
6,258
136
I would think that providing better-than-game-console levels of integration will eventually lead to more performance. Apple can afford to dedicate silicon to its own use cases in a way that Intel can't.
Sure, in theory, but let's not forget what this integration is aimed at: computing for the masses, the likes of the MacBook Air and iMac. The iMac was already used as a prime example in this thread: the current entry-level model uses a dual-core with a 5400RPM spinner. It's a pathetic display of performance for 2020, but that's what Apple and others sell today to the average Joe.

And last but not least, let's wait and see what OS this new breed of devices will be running.
 

Doug S

Senior member
Feb 8, 2020
786
1,137
96
Let me melt people's minds even more. Straight from the horse's mouth, which is Mark Gurman, who broke the news about ARM Macs.

"The transition to in-house Apple processor designs would likely begin with a new laptop because the company’s first custom Mac chips won’t be able to rival the performance Intel provides for high-end MacBook Pros, iMacs and the Mac Pro desktop computer. "
https://www.bloomberg.com/news/articles/2020-04-23/apple-aims-to-sell-macs-with-its-own-chips-starting-in-2021 11th paragraph of their analysis.

So basically, according to even Apple themselves, those ARM cores are not fast enough to compete with Intel x86 CPUs.

All that proves is that Mark Gurman is wrong. We have benchmarks for both iPhones/iPads and Macs, and we know the Apple SoCs are faster. Why you're in denial of that, when these benchmarks have been referenced often in these forums and in some cases were even run by Anandtech writers themselves, and instead believe a random journalist who gives no reason for his supposition, I don't know. Show some evidence for your claim beyond "because some writer jumped to that conclusion without evidence".
 

Glo.

Diamond Member
Apr 25, 2015
4,835
3,457
136
All that proves is that Mark Gurman is wrong. We have benchmarks for both iPhones/iPads and Macs, and we know the Apple SoCs are faster. Why you're in denial of that, when these benchmarks have been referenced often in these forums and in some cases were even run by Anandtech writers themselves, and instead believe a random journalist who gives no reason for his supposition, I don't know. Show some evidence for your claim beyond "because some writer jumped to that conclusion without evidence".
Haha. So the guy who broke the news of Apple releasing ARM-based Macs is suddenly wrong about Apple needing more cores to compete with Intel because its designs aren't powerful enough.

If Gurman is wrong about that, then he is also wrong about Apple switching from Intel to its own ARM chips ;).

Jesus. This is getting ridiculous.
 

shiznit

Senior member
Nov 16, 2004
422
13
81
Sure, in theory, but let's not forget what this integration is aimed at: computing for the masses, the likes of the MacBook Air and iMac. The iMac was already used as a prime example in this thread: the current entry-level model uses a dual-core with a 5400RPM spinner. It's a pathetic display of performance for 2020, but that's what Apple and others sell today to the average Joe.

And last but not least, let's wait and see what OS this new breed of devices will be running.
Fair point, but it doesn't necessarily apply across the board. This is the same company that ordered a 128 MB L4 die for a 13" laptop and put 24 MB of L2/L3 on a phone. I'm excited to see what they can do with a 200-300 mm2 silicon budget and a >20W power budget.
 

DrMrLordX

Lifer
Apr 27, 2000
17,842
6,816
136
We have benchmarks for both iPhones/iPads and Macs, we know the Apple SoCs are faster.
Are there head-to-head iPad vs Mac benchmarks running a large test suite of identical applications? I've only seen some SPEC results and Geekbench numbers.
 

Ajay

Diamond Member
Jan 8, 2001
9,482
3,950
136
But they're not doing it to extract more performance, they're doing it to extract higher margins.
Both. They are getting more performance per mm^2 and leveraging that, along with their software infrastructure, to get better margins. IMO.

*edit: well, and the rest of the hardware stack.
 

Thala

Golden Member
Nov 12, 2014
1,266
578
136
Are there head-to-head iPad vs Mac benchmarks running a large test suite of identical applications? I've only seen some SPEC results and Geekbench numbers.
SPEC itself is a collection of benchmarks covering a variety of desktop computational kernels. Same for Geekbench. What do you need more benchmarks for? What is your expectation if you had more benchmarks? Do you expect that x86 is magically pulling ahead if you search long enough for the right benchmark?
 

amrnuke

Golden Member
Apr 24, 2019
1,165
1,730
106
SPEC itself is a collection of benchmarks covering a variety of desktop computational kernels. Same for Geekbench. What do you need more benchmarks for? What is your expectation if you had more benchmarks? Do you expect that x86 is magically pulling ahead if you search long enough for the right benchmark?
I think it's reasonable to be suspicious. Linus Torvalds explains:

Oh, I definitely believe that llvm is the best part of GB4, the same way gcc is the best part of spec.

But llvm or gcc is not even close to complex UI loads. If you think the llvm I$ miss rates or branch prediction numbers look bad, I have a bridge to sell you. Those are still really good low miss rates. They look high only because you compare to some silly trivial benchmark that just has one single loop.

Modern GUI toolkits really do nasty things to a CPU in a way that even a "complex" load like a compiler written in C++ doesn't even come close to.

A compiler may have some OS abstractions, and various abstractions for optimization passes and particular optimizations (and for hw descriptions etc), but on the whole it's fairly core system code that doesn't do anything actively odd.

A GUI app ends up having abstraction upon abstraction (often through several layers of toolkit libraries) and event loops with signals going hither and thither and moving a mouse pointer or touching the screen can cause millions of instructions to be executed with nary a loop in sight because you just have those things calling each others or causing other events to be created and then you have more abstractions to actually paint and update the end result...

I$ misses do matter, but most simple benchmarks don't even begin to scratch the surface.
So SPEC and GB don't reflect real-world applications nearly as well as one would expect, given that they are composed of several different benchmarks. I think one can easily see this by comparing SPECint, SPECfp, and GB5 results to benchmark results for browsers, office apps, VMware, 7zip (combined), encryption, etc.

I would put SPEC and GB5 on the same level as any individual app test. If you know your workload correlates well with SPEC or GB scores, then use it. But using it to make sweeping statements about any chip is not just wrong, it's ignorant.
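The gap between an aggregate suite score and a specific workload is easy to sketch. SPEC does aggregate its subscores with a geometric mean; the chip scores below are purely hypothetical, invented to show how the aggregate can point away from the chip that is faster at *your* workload:

```python
import math

def geomean(xs):
    """Geometric mean, the aggregation SPEC uses across its subtests."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Purely hypothetical subtest scores for two chips (not real SPEC/GB data):
chip_a = {"compile": 50, "compress": 40, "browser": 20}
chip_b = {"compile": 35, "compress": 35, "browser": 35}

print(geomean(list(chip_a.values())))  # ~34.2
print(geomean(list(chip_b.values())))  # 35.0
```

Chip B "wins" the headline number even though chip A is over 40% faster at compilation; if compiling is your workload, the single aggregate score steers you to the wrong chip.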
 

DrMrLordX

Lifer
Apr 27, 2000
17,842
6,816
136
What do you need more benchmarks for?
If AT ran a benchmark on a new CPU release from AMD or Intel and all they did was run SPEC and Geekbench5, AT would become a laughingstock.

Here's AT's 3900x review, starting with the first benchmark page (ironically, SPEC)


Pay careful attention to how well Intel does with the 9900k in 2017 rate, and then how the 9900k gets whipped in most other MT benchmarks. SPEC alone tells you very little.
 

Carfax83

Diamond Member
Nov 1, 2010
6,068
870
126
I think it's reasonable to be suspicious. Linus Torvalds explains:
Could you do me a favor and link to where he says it? When I click on expand quote, the web page scrolls up to the top of the page. It must be a bug with the website or something, but I have tried it on several browsers and they all do the same thing.

Extremely annoying because you can't read expandable quotes! :mad:
 

Carfax83

Diamond Member
Nov 1, 2010
6,068
870
126
Pay careful attention to how well Intel does with the 9900k in 2017 rate, and then how the 9900k gets whipped in most other MT benchmarks. SPEC alone tells you very little.
What I don't get is why they even bother using Spec2006. Isn't that outdated? It's well over a decade old, which is an eternity in tech years.
 

Carfax83

Diamond Member
Nov 1, 2010
6,068
870
126
I doubt that anyone's updating the tests. They have historical significance for some (I guess).
Usually after a long time, most people retire benchmarks because they stop stressing the hardware. I've always wondered why Anandtech uses Spec2006 for smartphone reviews, yet doesn't use Spec2017.

I'm sure it's more complicated than what I'm saying, because I know that with SPEC you can use your own compilers.
 

Thala

Golden Member
Nov 12, 2014
1,266
578
136
If AT ran a benchmark on a new CPU release from AMD or Intel and all they did was run SPEC and Geekbench5, AT would become a laughingstock.

Here's AT's 3900x review, starting with the first benchmark page (ironically, SPEC)


Pay careful attention to how well Intel does with the 9900k in 2017 rate, and then how the 9900k gets whipped in most other MT benchmarks. SPEC alone tells you very little.
Pay careful attention to how all the SPEC tests in the review are single-threaded benchmarks! So what is your argument again?
 

DrMrLordX

Lifer
Apr 27, 2000
17,842
6,816
136
Pay careful attention to how all the SPEC tests in the review are single-threaded benchmarks! So what is your argument again?
Um what? From the article:

Moving on to the 2017 suite, we have to clarify that we’re using the Rate benchmark variations. The 2017 suite’s speed and rate benchmarks differ from each other in terms of workloads. The speed tests were designed for single-threaded testing and have large memory demands of up to 11GB, while the rate tests were meant for multi-process tests.
Am I missing something here?
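The speed/rate distinction the article describes can be illustrated with a toy sketch. This is not actual SPEC; the kernel is a stand-in workload, but the measurement shapes match: speed runs one copy and reports elapsed time, rate runs N concurrent copies and reports throughput:

```python
# Toy illustration of SPEC "speed" vs "rate" metrics (not real SPEC).
import time
from multiprocessing import Pool

def kernel(_):
    # Stand-in for a benchmark workload.
    return sum(i * i for i in range(200_000))

def speed_run():
    """Speed-style metric: elapsed time for a single copy."""
    t0 = time.perf_counter()
    kernel(None)
    return time.perf_counter() - t0

def rate_run(copies=4):
    """Rate-style metric: throughput with N concurrent copies."""
    t0 = time.perf_counter()
    with Pool(copies) as pool:
        pool.map(kernel, range(copies))
    elapsed = time.perf_counter() - t0
    return copies / elapsed  # completed copies per second

if __name__ == "__main__":
    print(f"speed: {speed_run():.4f}s for one copy")
    print(f"rate:  {rate_run():.1f} copies/s with 4 copies")
```

On a chip with strong multi-core scaling, the rate-style number grows with core count while the speed-style number barely moves, which is why a review quoting "2017 rate" results is indeed showing multi-process behavior.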
 

naukkis

Senior member
Jun 5, 2002
462
316
136
iPad Pro apps use HARDWARE DECODERS in the SoCs, not the CPU cores themselves.

Apple genuinely has a "Reality Distortion Field".

P.S. That rumored 8/4-core big.LITTLE design is going to be exactly the same under 7W of power.

Why? It's going to fit in a PASSIVELY cooled 12-inch MacBook.
Apple has great hardware decoders for everything :D

Show me a test where the iPad underperforms. Apple itself has said the iPad Pro is faster than 95% of laptops sold today.

And that rumored 8/4-core chip is a two-generations-newer CPU design on an improved 5nm manufacturing process. Expect phone and iPad SoCs to be a few tens of percent faster and more energy efficient than those sold now. The 8/4 version is still a question mark; as you yourself note, limiting such a monster chip to 7W would do it no favors. That 8/4 CPU design is for desktops and high-end laptops only, with something like a 25-45W power envelope.
 

Hitman928

Diamond Member
Apr 15, 2012
3,654
4,102
136
Usually after a long time, most people retire benchmarks because they stop stressing the hardware. I've always wondered why Anandtech uses Spec2006 for smartphone reviews, yet doesn't use Spec2017.

I'm sure it's more complicated than what I am saying, because I know that with spec you can use your own compilers.
Anandtech uses Spec2006 for mobile CPU tests because they have to rebuild all the tests to run on Android/iOS if a port doesn't already exist, and they've already done that work for Spec2006 but not for Spec2017. I'm sure rebuilding them for Apple is the bigger hurdle.
 
