Solved! ARM Apple High-End CPU - Intel replacement


Richie Rich

Senior member
Jul 28, 2019
470
229
76
There is a first rumor about an Intel replacement in Apple products:
  • ARM based high-end CPU
  • 8 cores, no SMT
  • IPC +30% over Cortex A77
  • desktop performance (Core i7/Ryzen R7) with much lower power consumption
  • introduction with the new-gen MacBook Air in mid-2020 (MacBook Pro and iMac also under consideration)
  • massive AI accelerator

Source: Coreteks
 
  • Like
Reactions: vspalanki
Solution
What an understatement :D And it looks like it doesn't want to die. Yet.


Yes, the A13 is competitive with Intel chips, but the emulation tax is about 2x. So given that A13 ~= Intel, for emulated x86 programs you'd get half the speed of an equivalent x86 machine. This is one of the reasons they haven't switched yet.
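
Back-of-the-envelope, the math looks like this (both the A13 ~= Intel parity and the 2x tax are assumptions, not measurements):
Code:
# Rough sketch of the emulation-tax arithmetic above; both inputs are assumptions.
native_arm_vs_intel = 1.0   # assume A13 ~= an equivalent Intel core on native code
emulation_tax = 2.0         # assumed ~2x slowdown for emulated/translated x86 code

emulated_x86_vs_native_x86 = native_arm_vs_intel / emulation_tax
print(f"Emulated x86 speed vs. an equivalent x86 machine: {emulated_x86_vs_native_x86:.0%}")
# -> 50%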

Another reason is that it would prevent the use of Windows on their machines, something some say is very important.

The level of ignorance in this thread would be shocking if it weren't depressing.
Let's state some basics:

(a) History. Apple has never let backward compatibility limit what they do. They are not Intel, they are not Windows. They don't sell perpetual compatibility as a feature. Christ, the big...

soresu

Platinum Member
Dec 19, 2014
2,667
1,865
136
Fine, you can continue with your misplaced doubt about Apple's ARM chips until the new Macs are released in six months and you're forced to eat your words.
Linus Torvalds may be a crude loudmouth bully in his communication 'style', but he knows his stuff, and software-wise he is far more responsible for the current success and stability of Linux than Bill Gates has ever been for Windows - I'd say that Linux is the better for his administration efforts.

If he says that the tests are worthless, I think it's safe to say that they are worthless.

Edit: Apparently he doesn't say they are worthless, but my point about his opinion stands regardless of the words he says - basically he's a disagreeable person, but his opinions are likely worth more than those of anyone on here, myself included.
 
Last edited:

Eug

Lifer
Mar 11, 2000
23,587
1,001
126
I mean . . . does Apple not have enough pull to get Tiger Lake-H instead?
Apple probably has already been manufacturing the Comet Lake iMacs for weeks now.

No need for Apple to get exclusive expensive early release chips from Intel if they’re going to update to ARM within the year anyway.
 

DrMrLordX

Lifer
Apr 27, 2000
21,640
10,858
136
Apple probably has already been manufacturing the Comet Lake iMacs for weeks now.

No need for Apple to get exclusive expensive early release chips from Intel if they’re going to update to ARM within the year anyway.

That or Intel couldn't deliver in an acceptable timeframe. I'm a little surprised they would take a 10c 14nm chip over their own silicon though. At least it'll be pretty thoroughly TDP-limited.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Maybe:

A14 (2+4): MacBook and MacBook Air
A14X (4+4): 14" MacBook Pro and 24" iMac
A14? (8+4): 16" MacBook Pro and 29" iMac
Don't you think that a 2-big-core A14 is obsolete on 5nm? Also, to keep up with the new ARM Cortex-X1 cores (2x X1 + 2x A78 big cores), Apple needs to bring four big cores to the iPhone sooner or later. The 5nm node offers a pretty good opportunity in terms of transistor headroom.
  • A14 could have 4+4
  • A14X could have 8+4

If the A14X has a bus that allows two A14X chips to be hooked together (Zen 1-like), then a Mac Pro is possible too. Such a 16+8 configuration would probably be faster than 24-core Threadrippers and might challenge the 32-core ones (Zen 2).
 
  • Like
Reactions: ksec

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Apple's transition to ARM is bound to flop just like Windows on ARM. Rosetta/Universal 2 don't make for solid developer foundations ...

What do end users get out of relying on a sluggish translation layer like Rosetta 2 when there's superior performance to be had on the native x86 equivalent devices?

What do developers get out of targeting Universal 2 specifically over higher-performance native x86 binaries? History has shown us many times that "intermediate bytecodes" and JIT compilation are failures, as evidenced by the industry's strong desire to converge architecturally on hardware designs ...

On both counts I can tell that the answer is that they get 'nothing' out of Apple's exercise, because they fail to understand the basic software development model, as evidenced by their obsessive focus on hardware rather than software.

Apple's software practices are what I'd describe as beyond 'dysfunctional' in this day and age. Does Apple seriously expect programmers to use garbage like Xcode? Their App Store policies are abhorrent maintenance crap for developers. Final Cut Pro X is a dead end. WebKit is their other steaming pile of hot trash. macOS has been an unmitigated disaster for years, and recently iOS as well ... (Catalina was especially terrible, but I can only envision far worse nightmares with ARM and Big Sur.)

Considering that Microsoft, of all corporations with high-end software expertise, couldn't do x86 emulation on ARM justice, I highly doubt Apple will even come close to getting it right. I don't care how good Apple's hardware is if they're not going to spend a single dime on their broken software team ...
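
To be clear about what a translation layer does mechanically - it translates blocks of x86 code once, caches the result, and re-runs the cached native code - here's a toy sketch of that idea. This is not Rosetta 2's actual design; every name and cost below is invented purely for illustration.
Code:
# Toy model of a dynamic binary translation cache: translate a guest block once,
# then re-execute the cached native version. Not Rosetta 2's actual design; the
# costs are invented purely for illustration.
import time

TRANSLATE_COST = 1e-3   # assumed one-time cost to translate a block (seconds)
EXECUTE_COST = 1e-5     # assumed per-run cost of an already-translated block

translation_cache = {}  # guest block address -> "translated" native block

def run_block(address):
    """Translate on first use, then reuse the cached translation."""
    if address not in translation_cache:
        time.sleep(TRANSLATE_COST)       # pay the translation tax once per block
        translation_cache[address] = f"native_block_{address:#x}"
    time.sleep(EXECUTE_COST)             # every run still has a small overhead
    return translation_cache[address]

start = time.perf_counter()
for _ in range(1000):                    # a hot loop keeps hitting the same block
    run_block(0x1000)
print(f"1000 runs of one hot block: {time.perf_counter() - start:.3f} s "
      "(translation cost paid once, not 1000 times)")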
 
  • Like
Reactions: Tlh97 and Carfax83

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Fine, you can continue with your misplaced doubt about Apple's ARM chips until the new Macs are released in six months and you're forced to eat your words.

Calm down. I also haven't said that Apple won't perform, just that it could be the case, and a good Geekbench score in the clang subtest doesn't say much.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Apple's transition to ARM is bound to flop just like Windows on ARM. Rosetta/Universal 2 don't make for solid developer foundations ...

What do end users get out of relying on a sluggish translation layer like Rosetta 2 when there's superior performance to be had on the native x86 equivalent devices?

What do developers get out of targeting Universal 2 specifically over higher-performance native x86 binaries? History has shown us many times that "intermediate bytecodes" and JIT compilation are failures, as evidenced by the industry's strong desire to converge architecturally on hardware designs ...

On both counts I can tell that the answer is that they get 'nothing' out of Apple's exercise, because they fail to understand the basic software development model, as evidenced by their obsessive focus on hardware rather than software.

Apple's software practices are what I'd describe as beyond 'dysfunctional' in this day and age. Does Apple seriously expect programmers to use garbage like Xcode? Their App Store policies are abhorrent maintenance crap for developers. Final Cut Pro X is a dead end. WebKit is their other steaming pile of hot trash. macOS has been an unmitigated disaster for years, and recently iOS as well ... (Catalina was especially terrible, but I can only envision far worse nightmares with ARM and Big Sur.)

Considering that Microsoft, of all corporations with high-end software expertise, couldn't do x86 emulation on ARM justice, I highly doubt Apple will even come close to getting it right. I don't care how good Apple's hardware is if they're not going to spend a single dime on their broken software team ...
Now that's gonna stir up some heated conversation here with the ARMada 😂
 

Eug

Lifer
Mar 11, 2000
23,587
1,001
126
Don't you think that a 2-big-core A14 is obsolete on 5nm?
Is it? One might have said the same thing for 7 nm, no?

I personally don’t know since I know nothing about chip design, but my uneducated mind is not convinced that having to compete with a Cortex SoC, just because, is a compelling argument.
 
  • Like
Reactions: Tlh97

soresu

Platinum Member
Dec 19, 2014
2,667
1,865
136
Don't you think that a 2-big-core A14 is obsolete on 5nm? Also, to keep up with the new ARM Cortex-X1 cores (2x X1 + 2x A78 big cores), Apple needs to bring four big cores to the iPhone sooner or later. The 5nm node offers a pretty good opportunity in terms of transistor headroom.
Why?

Apple is sparing with its halo performance cores for a reason: they suck the battery dry when in full swing. That's the second edge of the blade with any super-wide design, and it's probably why they put such effort into the perf/watt development of their little cores, which are still leaving ARM's Cortex little cores in the dust on that front given zero consumer updates since the A55.

Bear in mind that even with the A77, Qualcomm have moved to having one 'prime' core at full-whack frequency, with the others a few hundred MHz lower, and then the little cores.

If the SD 875 uses the X1 at all, it will likely be that single prime core, with the other three being A78 - power consumption alone will demand this configuration, though perhaps an 8cx successor might use 2x X1 cores or more.

I really do hope that ARM put their best foot forward with the next-gen little core to match with Matterhorn - considering that the more efficient background app execution mode of any mobile device depends on them, to say nothing of all the lesser gadgets like streaming sticks that use only little cores for their basic computation.

It should also be noted that core transistor count will likely not stay the same from generation to generation within a given width; this means that while perf will go up, so will power and area each generation.

The few examples where this is not true, like the A73 and A78, have been for differing reasons: the A73 was not a direct uArch successor to the A72, coming from the Sophia design hub and quite possibly being more of a descendant of the A17 at a fundamental level, while the A78 was likely Austin recognising areas where they could have made the A77 more efficient, given more time to work on it.
 
Last edited:
  • Like
Reactions: Tlh97 and Carfax83

SarahKerrigan

Senior member
Oct 12, 2014
374
539
136
Linus Torvalds may be a crude loudmouth bully in his communication 'style', but he knows his stuff, and software-wise he is far more responsible for the current success and stability of Linux than Bill Gates has ever been for Windows - I'd say that Linux is the better for his administration efforts.

If he says that the tests are worthless, I think it's safe to say that they are worthless.

Except he doesn't. On RWT 403.gcc is even occasionally called "Linusmark", including by Torvalds himself, and his primary criticism of it is that SPEC allows vendor tuning games in official submissions (a complaint that isn't really applicable to Anandtech runs, since they aren't doing the contortions that vendor submissions do.) He also cites 403.gcc himself to defend Intel microarchitectures in the past, saying "just look at the gcc numbers. You can't break gcc." He also refers to SPEC's gcc subtest as "the best measure of over-all integer performance", and says "the only interesting SPEC benchmark remains gcc.403."


"Torvalds hates 403.gcc!" appears to be a fever dream beginner99 had.
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,324
8,009
136
Except he doesn't. On RWT 403.gcc is even occasionally called "Linusmark", including by Torvalds himself, and his primary criticism of it is that SPEC allows vendor tuning games in official submissions (a complaint that isn't really applicable to Anandtech runs, since they aren't doing the contortions that vendor submissions do.) He also cites 403.gcc himself to defend Intel microarchitectures in the past, saying "just look at the gcc numbers. You can't break gcc." He also refers to SPEC's gcc subtest as "the best measure of over-all integer performance", and says "the only interesting SPEC benchmark remains gcc.403."


"Torvalds hates 403.gcc!" appears to be a fever dream beginner99 had.

He's probably referring to this post:


Linus definitely isn't a fan of SPEC or Geekbench and he's not shy about dishing out his criticism. I mean, right before your quoted sentence he says this:

That said, I still claim that 403.gcc is the best of a pretty sad bunch. By a big margin. And I do think both Intel and IBM play similar games, so the numbers are probably roughly comparable, even if they have almost no correlation to what you'd get on real applications rather than spec-tuned benchmarks.

I honestly don't understand the relevance of some of the points he's making in terms of GUI branching complexity, but I'm also not a software developer; maybe it makes more sense to others. I also don't understand why everyone is fighting so hard about this. It was somewhat interesting to talk about the possibilities when we didn't know for sure Apple would transition to their own CPUs, but now that it's official, why not just sit back and wait for the benches? I mean, it's not like we haven't discussed this issue to death, both recently and over the last few years.
 
  • Like
Reactions: Tlh97 and Carfax83

SarahKerrigan

Senior member
Oct 12, 2014
374
539
136
He's probably referring to this post:


Linus definitely isn't a fan of SPEC or Geekbench and he's not shy about dishing out his criticism. I mean, right before your quoted sentence he says this:



I honestly don't understand the relevance of some of the points he's making in terms of GUI branching complexity, but I'm also not a software developer; maybe it makes more sense to others. I also don't understand why everyone is fighting so hard about this. It was somewhat interesting to talk about the possibilities when we didn't know for sure Apple would transition to their own CPUs, but now that it's official, why not just sit back and wait for the benches? I mean, it's not like we haven't discussed this issue to death, both recently and over the last few years.

He's enough of a fan of it to say that it's the single best test of over-all int perf. Yeah, it's not a great proxy for a lot of UI-intensive loads, which will indeed hit L1I harder (especially games), but that's actually the opposite of what beginner99 was saying - beginner99 was arguing that Apple's cores will be great for consumer loads but somehow atrocious for compiling. Which is a statement utterly without evidence.
 
  • Like
Reactions: Lodix and Doug S

blckgrffn

Diamond Member
May 1, 2003
9,128
3,069
136
www.teamjuchems.com
I don't think anyone disputes that. The open questions are:

1) will it be A14 based or come far enough in the future that it is A15 or possibly even A16 based?

2) will it be a monolithic 'big chip with lots of cores' like Intel or a 'chiplet' strategy like AMD?

3) if it is a chiplet strategy, will they design the chips that go into smaller Macs, like the hypothetical 8+4 chip going into a MacBook Pro, in such a way that they can function in that role standalone but also work as a chiplet with one or more others to create 16-, 24- and 32-core variations?

If Apple plans to use their own GPU in the Mac Pro (something that we have no clue about, but have to recognize as a real possibility) there wouldn't be room for enough CPU cores and a big enough GPU on one die, though they could have two big dies - one with lots of CPU cores and one with lots of GPU units. All dies might have 32 cores but enable only 16 of them in a lower end model etc.

Pretty sure we said mostly the same thing using different words... close enough for me anyway :)

I missed your post/reply on the battery sizes. I'll stick to my assertion that they'll shrink the battery sizes as much as they can to keep the current ~10 hours of usage and maximize their margins because they are kings of making money on consumer electronics.

I feel like if they hit a little stall down the road due to lack of technology improvements on the node side, or we start seeing PC laptops with ~15-20 hour battery life, they'll magically find that larger Wh batteries grant their laptops longer battery life :) Saving the marketing mojo for when they need it and reaping profits in the meantime.

This also allows them wiggle room in the future - if they ever need more power for the CPU/GPU but have committed to ~15 hour battery life, then they might need to increase battery sizes to avoid losing that, which is undesirable from a marketing standpoint as well.

The fun part is that we'll just have to wait and see if they do that or come out with solutions that have larger (similar to current sizes) batteries from the jump. Leaks should be coming before too long, I hope!
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,324
8,009
136
He's enough of a fan of it to say that it's the single best test of over-all int perf. Yeah, it's not a great proxy for a lot of UI-intensive loads, which will indeed hit L1I harder (especially games), but that's actually the opposite of what beginner99 was saying - beginner99 was arguing that Apple's cores will be great for consumer loads but somehow atrocious for compiling. Which is a statement utterly without evidence.

I don't think he's saying it's the best test of over-all int perf, but rather that within the SPEC suite, the gcc compile is the best test of integer performance, but that the other tests set the bar really low.

Personally I have less of a problem with SPEC than I do with GB. I don't consider GB to be a very useful benchmark, to be honest. SPEC is an interesting starting point, but to me, not enough to draw any conclusions from. It will be interesting to see how things shake out once actual Apple systems come out and people can benchmark them freely. I'm not going to buy into the Apple ecosystem, but if Apple really is able to match or beat Intel's and AMD's best (say up to their 8 - 12 core offerings) and do so at a much lower power, then maybe that will help pave the way for ARM to take a foothold on the desktop and do the same. Then I could run Linux on ARM and have top-tier performance with minimal cooling and still have a quiet computer with a small footprint. Sign me up for that. I have doubts that this will happen, but I'm willing to wait and see how it plays out.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Apple's cores will be great for consumer loads but somehow atrocious for compiling. Which is a statement utterly without evidence.

No. I never said that. Don't put words in my mouth.

I said Graviton2 sucks at compiling Linux (for evidence, see Phoronix) while it actually fares well in SPEC and Geekbench compile tests. My point was merely that we will have to see if Apple's cores suffer from the same issue, as the Geekbench scores give us no hint of how they will fare on more complex compile loads. And I said this is relevant because MacBooks are quite popular in some developer circles. So if compiling suffers, on top of not being x86, that could seriously impact the usability of MacBooks for said developers and hence impact sales. Maybe it's a non-issue (compile time), but it is something that should be tested once the product is released.
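
If anyone wants to check this once hardware is available, the kind of test I mean is just timing a clean parallel build of a real project, roughly like this (the source path and job count are placeholders):
Code:
# Rough sketch of a "real" compile benchmark: time a clean parallel build of a
# large codebase. The source directory is a placeholder; any big project works.
import os
import subprocess
import time

SRC_DIR = "/path/to/linux"          # placeholder path
JOBS = os.cpu_count() or 4

subprocess.run(["make", "clean"], cwd=SRC_DIR, check=True)

start = time.perf_counter()
subprocess.run(["make", f"-j{JOBS}"], cwd=SRC_DIR, check=True)
elapsed = time.perf_counter() - start

print(f"Clean build with -j{JOBS}: {elapsed:.1f} s wall-clock")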
 

blckgrffn

Diamond Member
May 1, 2003
9,128
3,069
136
www.teamjuchems.com
That or Intel couldn't deliver in an acceptable timeframe. I'm a little surprised they would take a 10c 14nm chip over their own silicon though. At least it'll be pretty thoroughly TDP-limited.

I think it's really surprising, given they had to change socket/chipset to do so?

Perhaps, to meet their "2 year deadline", they felt compelled to move now so that chip availability for the 9th-gen desktop parts won't put any pressure on their internal product schedules over the next two years.

Maybe, IDK. It seems like Apple would have been able to secure an agreement on availability and support if needed.
 

Doug S

Platinum Member
Feb 8, 2020
2,269
3,521
136
Linus Torvalds may be a crude loudmouth bully in his communication 'style', but he knows his stuff, and software-wise he is far more responsible for the current success and stability of Linux than Bill Gates has ever been for Windows - I'd say that Linux is the better for his administration efforts.

If he says that the tests are worthless, I think it's safe to say that they are worthless.

He never said the tests are worthless, just that they aren't very applicable to real world results. If you pay attention to him, he says that about ALL benchmarks, would laugh at people who try to claim that something like Cinebench is worth anything, and would be the first to tell you that the SPEC or even Geekbench compiler results would be worth more if you wanted to try to gauge real world performance. To him, compiler benchmarks are the "least useless" of the bunch because they can't be gamed by compiler tricks or other cheats. Which is pretty much what I said above. His issue with Geekbench is more related to the short duration of the runs and smaller footprints, but that's understandable as it was designed to run on phones whereas SPEC was designed to test workstations.

If you read the threads on RWT that have been linked here, you'll see that I've discussed this with him on more than one occasion at that site and we pretty much agree on this. Sure, compiling the whole Linux kernel would in some ways make for a better benchmark when measuring multithreaded results, but it wouldn't matter all that much for single thread - it takes a LOT longer but isn't going to have any larger a memory footprint or produce much in the way of different results (other than maybe making a phone throttle if it gets too warm).

Compiling such a large amount of source code also takes it out of the domain of being primarily a "CPU" benchmark, because it would by necessity involve lots of file I/O, so faster NAND, a more efficient kernel and so forth would have more effect on the results than you really want in a "CPU" benchmark. While that provides some useful information if you want to evaluate the whole system, if you are discussing x86 vs ARM it is noise you want to eliminate from the results.
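
If you did want to use a big compile as a benchmark anyway, one crude way to see how much of the run is actually CPU work versus I/O and other waiting is to compare the compilers' CPU time against wall-clock time. A minimal sketch (Unix-only; the path and -j value are placeholders):
Code:
# Crude split of a build's wall-clock time into CPU work vs. everything else
# (I/O, scheduling, serialization). Unix-only; the path and -j value are placeholders.
import resource
import subprocess
import time

SRC_DIR = "/path/to/project"        # placeholder path

start = time.perf_counter()
subprocess.run(["make", "-j8"], cwd=SRC_DIR, check=True)
wall = time.perf_counter() - start

child = resource.getrusage(resource.RUSAGE_CHILDREN)
cpu_seconds = child.ru_utime + child.ru_stime   # CPU time burned by the compiler processes

print(f"wall-clock: {wall:.1f} s, child CPU time: {cpu_seconds:.1f} s")
# If cpu_seconds divided by the core count is well below the wall-clock time, a lot
# of the run was spent blocked on I/O or waiting rather than doing pure CPU work.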
 

Doug S

Platinum Member
Feb 8, 2020
2,269
3,521
136
Pretty sure we said mostly the same thing using different words... close enough for me anyway :)

I missed your post/reply on the battery sizes. I'll stick to my assertion that they'll shrink the battery sizes as much as they can to keep the current ~10 hours of usage and maximize their margins because they are kings of making money on consumer electronics.

Nah, now that Jony Ive is no longer ruling Apple's designs with an iron fist, they won't sacrifice battery to get another mm of thinness like they always did. Notice how they went back from that disaster of a keyboard after he left?

Just look at the iPhone 11 line: those have almost the best battery life of any smartphones out there, including ones with far larger batteries. If Apple wanted to be "decent but nowhere near top of the heap" like they were in smartphone battery life comparisons for many years, they could have shaved a bit of thickness and weight off the iPhone 11. Ive would have gone in a more svelte direction if he were still in charge.

I think they will have more battery life, but I can't say for sure they won't shrink the battery somewhat as well. After all, there's a point where additional battery life in a laptop is pretty meaningless. If you can get 20 hours, for instance, who is going to think it is worthwhile getting a 21st hour? That's sort of how I view phones too: beyond "all day" battery life it is pointless, because humans have to sleep, so there is always a time to charge it.
 

defferoo

Member
Sep 28, 2015
47
45
91
Actually, I think they could get away with an A14 non-X for the MacBook and MacBook Air. They could play with clock speeds if necessary.

Remember, the A13 in the frickin' iPhone already scores higher than the Core i7-1060NG7 in the 2020 MacBook Air (which gets complaints because of fan noise).


Maybe:

A14 (2+4): MacBook and MacBook Air
A14X (4+4): 14" MacBook Pro and 24" iMac
A14? (8+4): 16" MacBook Pro and 29" iMac
I think Apple is going to be more ambitious with these chips. It would be sad if a MacBook Air with their own processors were weaker than a cheaper iPad Pro.

My guess is their notebook lineup might look like:
A14X (4+4): MacBook/MacBook Air
A14? (8+4): 13" MacBook Pro
A14? (12?+4+enhanced GPU): 16" MacBook Pro

I could also see them just using the 8+4 CPU config for the 16-inch but adding an enhanced GPU on top.
 
  • Like
Reactions: Eug

ksec

Senior member
Mar 5, 2010
420
117
116
I think the most interesting thing is Apple's new rumoured iMac CPU being ~95W.

Since it is obvious Apple wants to tout its own ARM silicon as being better, their ARM offering on the desktop could potentially be a monster.

Hopefully there will be a live keynote in September... (increasingly unlikely).
 

Eug

Lifer
Mar 11, 2000
23,587
1,001
126
I think Apple is going to be more ambitious with these chips. It would be sad if a MacBook Air with their own processors were weaker than a cheaper iPad Pro.

My guess is their notebook lineup might look like:
A14X (4+4): MacBook/MacBook Air
A14? (8+4): 13" MacBook Pro
A14? (12?+4+enhanced GPU): 16" MacBook Pro

I could also see them just using the 8+4 CPU config for the 16-inch but adding an enhanced GPU on top.
It should be noted that the MacBook Air with 13" Retina screen, 8 GB RAM, and 256 GB storage is just $999.

The iPad Pro with 12.9" Retina screen, 6 GB RAM, 256 GB storage, and Magic Keyboard is $1448.

IOW, to get a similar setup with the iPad Pro, you have to spend 45% more than what it would cost for the MacBook Air. Even if you remove the Magic Keyboard, it's still priced higher than the MacBook Air, by $100.
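
The arithmetic, if anyone wants to check it (the $349 Magic Keyboard price is implied by the figures above):
Code:
# Quick check of the price comparison above; the $349 keyboard price is implied
# by the quoted totals (1448 - 349 = 1099).
macbook_air = 999
ipad_pro_plus_keyboard = 1448
magic_keyboard = 349

premium = ipad_pro_plus_keyboard / macbook_air - 1
print(f"iPad Pro + Magic Keyboard vs. MacBook Air: {premium:.0%} more")          # ~45%
print(f"iPad Pro alone vs. MacBook Air: ${ipad_pro_plus_keyboard - magic_keyboard - macbook_air} more")  # $100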
 

IvanKaramazov

Member
Jun 29, 2020
56
102
66
I could also see them just using the 8+4 CPU config for the 16-inch but adding an enhanced GPU on top.

I also expect GPU will be the biggest point of differentiation between the MBP sizes. If the 8+4s are sufficiently performant, beyond maybe boosting the clocks a bit it might make sense for Apple to focus on adding GPU cores and more / faster unified memory in the larger MBP.

Of course, it's always possible that the larger MBP and larger iMac won't transition to Apple Silicon until the A15 generation in 2021. It might be telling that the leaked Comet Lake iMac chip is suitable for the 27" but no comparable benchmarks have surfaced for the 21", while all the rumors of a redesign have focused on the supposed 24" with no indication of an equivalent larger iMac.

EDIT - The larger models being another year / generation off also gives Apple an additional year to ramp up whatever is going on with their high-end GPU plans (AMD v Apple). The A14 series Apple Silicon chips will certainly have faster GPUs than the smaller models with Intel iGPUs, but it's a different story for the larger, dGPU models.
 
Last edited: