Solved! ARM Apple High-End CPU - Intel replacement

Page 26 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Richie Rich

Senior member
Jul 28, 2019
470
229
76
There is a first rumor about Intel replacement in Apple products:
  • ARM based high-end CPU
  • 8 cores, no SMT
  • IPC +30% over Cortex A77
  • desktop performance (Core i7/Ryzen R7) with much lower power consumption
  • introduction with new gen MacBook Air in mid 2020 (considering also MacBook PRO and iMac)
  • massive AI accelerator

Source Coreteks:
 
Solution
What an understatement :D And it looks like it doesn't want to die. Yet.


Yes, A13 is competitive against Intel chips but the emulation tax is about 2x. So given that A13 ~= Intel, for emulated x86 programs you'd get half the speed of an equivalent x86 machine. This is one of the reasons they haven't yet switched.
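The emulation-tax arithmetic above can be sketched in a couple of lines (a back-of-envelope illustration only; the 2x tax and the rough A13 ≈ Intel parity are the figures from the post, not measurements):

```python
# If native ARM performance roughly matches a given x86 chip, a ~2x
# binary-translation overhead halves the speed of emulated x86 programs.
def emulated_relative_speed(native_ratio, emulation_tax):
    """native_ratio: ARM perf / x86 perf on native code.
    emulation_tax: slowdown factor when running translated x86 binaries."""
    return native_ratio / emulation_tax

# A13 ~= Intel (ratio 1.0) with a 2x tax -> emulated x86 at ~50% speed.
print(emulated_relative_speed(1.0, 2.0))  # 0.5
```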

Another reason is that it would prevent the use of Windows on their machines, something some say is very important.

The level of ignorance in this thread would be shocking if it weren't depressing.
Let's state some basics:

(a) History. Apple has never let backward compatibility limit what they do. They are not Intel, they are not Windows. They don't sell perpetual compatibility as a feature. Christ, the big...

Richie Rich

Senior member
Jul 28, 2019
470
229
76
3.1 GHz for an iPhone is unreal (an A13 @ 3.1 GHz would consume +60% more power at 7nm, and +23% at 5nm).
But the ST score looks to be at the higher end of the plausible range, because A13 score 1330 x 1.2 (IPC gain) x 2.75/2.65 GHz = 1656 pts.
Not sure Apple can gain the full +20% of IPC though. IMHO fan-fiction.
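The projection above, spelled out (all inputs are from the post: the A13's 1330 ST score at 2.65 GHz, a rumored +20% IPC, and a hypothetical 2.75 GHz A14 clock):

```python
# Scale the A13 Geekbench ST score by the rumored IPC gain and the
# ratio of the projected A14 clock to the A13 clock.
a13_score = 1330           # A13 single-thread score (from the post)
ipc_gain = 1.20            # rumored +20% IPC
clock_ratio = 2.75 / 2.65  # projected A14 clock / A13 clock

a14_projection = a13_score * ipc_gain * clock_ratio
print(round(a14_projection))  # 1656
```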
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
IMHO Apple can sell and support both x86 and ARM in parallel for as long as they need to.

You are again completely missing the economic factors. If Apple drops Intel from their biggest-volume products, e.g. lower-end MacBooks, then losing Apple as a customer suddenly becomes less of a problem for Intel, because the remaining volume is much lower. The lower the volume Apple buys, the more the balance shifts, and Intel can start raising prices on them, especially for the Mac Pro's Xeon CPUs, since that Mac just launched and Apple doesn't have any CPU with that many cores.

Also, if Apple starts switching to ARM, sales of x86 Apple products will decline, because people will know they'd be buying a soon-to-be-obsolete product.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
You are again completely missing the economic factors. If Apple drops Intel from their biggest-volume products, e.g. lower-end MacBooks, then losing Apple as a customer suddenly becomes less of a problem for Intel, because the remaining volume is much lower. The lower the volume Apple buys, the more the balance shifts, and Intel can start raising prices on them, especially for the Mac Pro's Xeon CPUs, since that Mac just launched and Apple doesn't have any CPU with that many cores.
Yeah, but if you had continued reading, the next sentence was: "This will have an advantage for Apple as they can ask a premium price for premium ARM performance and battery consumption."
Apple can sell at a volume ratio of 10 to 90 in favor of Intel, keeping ARM MacBooks as a high-margin premium brand.
Obviously they'd raise that share every year as more and more software gets compiled for ARM, until the transition is complete.
Intel cannot ask higher prices for the crap they are delivering right now, especially with AMD waiting at Apple's door.
 
Last edited:

Doug S

Platinum Member
Feb 8, 2020
2,266
3,516
136
It is also possible that the Apple A14 score is real, but that the chip was running at a different clock rate than what the shipping part will actually end up with.

Apple doesn't bin like Intel, so they have to choose a clock rate that the overwhelming majority of chips will pass at, at a given power budget. They don't really know what the yield curve or clock/power curve look like until TSMC has finished all their process tuning and enters mass production in ~ June.

One A14 that runs at 3.1 GHz doesn't mean 95%+ of all non-defective A14 chips will pass at that speed in TSMC's final version of N5. So it isn't even about whether this leak is real. It could be real, but still not reflect what iPhone 12s contain this fall, because even Apple doesn't know where the final clock rate will end up yet.
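The "no binning" point can be illustrated with a toy model (synthetic numbers, not real yield data: I'm assuming a normal distribution of per-chip maximum clocks just for the sketch):

```python
# With no binning, the shipping clock must be one that nearly all
# non-defective chips can hit at the target power budget.
import random

random.seed(0)
# Pretend each chip's maximum stable clock (GHz) is normally distributed.
max_clocks = [random.gauss(3.2, 0.15) for _ in range(10_000)]

def shipping_clock(samples, pass_rate=0.95):
    """Highest clock that at least `pass_rate` of sampled chips reach."""
    ordered = sorted(samples)
    cutoff = int(len(ordered) * (1 - pass_rate))
    return ordered[cutoff]

# The chosen clock sits well below the average chip's maximum.
print(round(shipping_clock(max_clocks), 2))
```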
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
SPECint2006:
  • A11 ... +23% over A10
  • A12 ... +16% over A11
  • A13 ... +11% over A12

Geekbench5.1:
  • A11 ... +19% over A10
  • A12 ... +13% over A11
  • A13 ... +14% over A12

... +7% looks like too small an IPC gain given the last few IPC jumps.
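The per-generation gains quoted above compound multiplicatively; a quick check of the cumulative A10-to-A13 uplift on both benchmarks:

```python
# Cumulative IPC uplift from chaining the quoted generational gains.
from math import prod

spec_gains = [1.23, 1.16, 1.11]  # A11, A12, A13 (SPECint2006)
gb5_gains = [1.19, 1.13, 1.14]   # A11, A12, A13 (Geekbench 5.1)

print(f"SPECint2006 A10->A13: {prod(spec_gains):.2f}x")  # 1.58x
print(f"Geekbench 5 A10->A13: {prod(gb5_gains):.2f}x")   # 1.53x
```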
 
Last edited:

Richie Rich

Senior member
Jul 28, 2019
470
229
76
There is no wall, I think. IMHO it's more about the transistor cost per IPC gain with the available technology. They can sit at some local maximum for a while, like Intel did with Skylake (or as Musk pointed out the Space Shuttle did). But while Intel is still running in circles around its local optimum and cannot climb out, Apple seems to have found a way to get another +80% of IPC. I wouldn't be surprised if Apple keeps up >10% IPC gains for the next 2 years. But let's see...
 


Doug S

Platinum Member
Feb 8, 2020
2,266
3,516
136
SPECint2006:
  • A11 ... +23% over A10
  • A12 ... +16% over A11
  • A13 ... +11% over A12

Geekbench5.1:
  • A11 ... +19% over A10
  • A12 ... +13% over A11
  • A13 ... +14% over A12

... +7% looks like too small an IPC gain given the last few IPC jumps.

Increasing your clock rate lowers your IPC: DRAM and cache don't magically reduce their latencies by the same percentage as the clock rate increase, so memory becomes "further away" (in terms of CPU cycles), reducing IPC. Apple hasn't had a clock rate increase similar to what the jump to 3.1 GHz would represent for several years, so it is only natural that the IPC gain would be smaller when the clock rate gain is bigger. Plus it becomes harder and harder to increase IPC further the higher it already is.

Still I think we need to take these numbers with a big grain of salt, since like I said Apple cannot know today what they'll clock the A14 at, so we can't assume it will run at 3.1 GHz even if the leak is real. And we can't assume the IPC gain would be only 7% if the clock was increased by only 5% instead of the over 10% that 3.1 GHz would represent.
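Doug S's memory-latency point can be sketched numerically (the 100 ns DRAM latency is an illustrative round number I'm assuming, not an A13/A14 measurement):

```python
# DRAM latency is roughly fixed in nanoseconds, so a higher clock means
# more stall cycles per miss, which shows up as lower measured IPC.
def miss_latency_cycles(dram_latency_ns, clock_ghz):
    # ns * (cycles per ns) = cycles
    return dram_latency_ns * clock_ghz

for clock in (2.65, 3.10):
    cycles = miss_latency_cycles(100, clock)
    print(f"{clock} GHz -> {cycles:.0f} cycles per DRAM miss")
```

Same physical wait, ~17% more wasted cycles at 3.1 GHz than at 2.65 GHz.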
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
There is no wall, I think. IMHO it's more about the transistor cost per IPC gain with the available technology. They can sit at some local maximum for a while, like Intel did with Skylake (or as Musk pointed out the Space Shuttle did). But while Intel is still running in circles around its local optimum and cannot climb out, Apple seems to have found a way to get another +80% of IPC. I wouldn't be surprised if Apple keeps up >10% IPC gains for the next 2 years. But let's see...
I would. Apple has exploited nearly as much as they can, it appears, and the law of diminishing returns is taking over.

Still, if they apply their A13X design to a laptop chip, they have a very competitive piece. The fact that they've nearly maximized IPC means they can now focus on several other areas of development to make their products more compelling to their customers, such as ensuring ARM compatibility with all their popular creator-oriented applications.
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,026
136
Look, just stop. You're embarrassing yourself. Both of these are ridiculous claims and by insisting on them you're deciding you want to play with the kids interested in insulting each other, rather than with the adults interested in how this technology actually works.

How about you stop with the personal insults. If you knew my background you’d be eating your boot.
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,026
136
Development work? WTF do you think the LLVM sub-benchmark of GB does?

Time to put this argument to bed.

First, regarding the Clang benchmark: do you really, honestly think Geekbench has a Clang compiler hidden in their benchmark? No, they do not. They would never have made it into the App Store if they had. It's an approximation of a workload. However, let's play that game.

Did you happen to notice the multi-threaded performance for those benchmarks? Development workloads are multithreaded, NOT single-threaded. Do you know what the largest percentage of MacBooks are used for? Development!

Let’s take a look at the A13 for a moment:

2 x 2.6 GHz big cores @ 5-6 watts of power. Multiply that by 4 to get an 8-core, 8-thread part at 20-24 watts. Wait! There isn't any hyperthreading, no DDR4, no PCIe... Actually, come to think of it, that smartphone SoC is missing every single major feature that modern machines have. Before you know it, Apple has blown past a 45 watt TDP.
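The scaling arithmetic above, as a sketch (these are the poster's rough figures; uncore power for memory controllers, PCIe, etc. is deliberately left out, which is the point being made):

```python
# Naively scale 2 big cores at 5-6 W up to an 8-core part.
low_w, high_w = 5, 6  # watts for the 2 big A13 cores (rough figure)
scale = 4             # 2 cores -> 8 cores

print(f"core power alone: {low_w * scale}-{high_w * scale} W")  # 20-24 W
# Everything a desktop/laptop SoC adds on top (DDR4, PCIe, bigger
# uncore) comes out of whatever TDP headroom remains.
```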

ARM CPUs, including those from Apple, look very attractive until you get into the nitty-gritty of it. Scaling up any ARM CPU just means you'll end up with a similar perf/watt to an x86 CPU. AMD and Intel aren't sandbagging; they have to deal with the laws of physics just like Apple does.

That isn’t to say that Apple and other CPU/SoC manufacturers aren’t doing a bang up job, but if you think they will somehow have a monstrous price/performance/power advantage over x86, I have a bridge to sell you.
 

Doug S

Platinum Member
Feb 8, 2020
2,266
3,516
136
Another possibility is that this is a processor designed to go into Macs, and Geekbench is being fooled as to what it is running on. Pretty sure that if Apple does ARM Macs they will have at least some support for running phone/tablet apps (if nothing else to aid developers) so this could be operating in a device that has a larger power and cooling budget than the iPhone.

Obviously if you have support for running phone/tablet apps in a Mac you need to provide a way for it to "lie" to the apps as far as what hardware it is being run on, what the screen size is, etc. so that you could simulate a variety of phones...
 

Nothingness

Platinum Member
Jul 3, 2013
2,420
751
136
First, regarding the Clang benchmark: do you really, honestly think Geekbench has a Clang compiler hidden in their benchmark? No, they do not. They would never have made it into the App Store if they had. It's an approximation of a workload. However, let's play that game.
You're wrong. Geekbench contains a significant part of clang. Please read the GB5 Workloads document.

Did you happen to notice the multi-threaded performance for those benchmarks? Development workloads are multithreaded NOT single threaded. Do you know what the largest percentage of Macbooks are used for? Development!
If by that you mean compilers are multithreaded that's quite often not the case. Or perhaps you think that parallel compilation of multiple files is multithreading? It's not.

And sorry but hyperthreading doesn't bring a lot if you have a fast SSD. Here is an example on my 4 cores 8 threads CPU:
Code:
time make -j4
real    0m54.395s
user    3m4.847s
sys    0m17.991s

time make -j8
real    0m46.903s
user    3m59.115s
sys    0m22.643s
Real time decreases by ~14%, less than what a single extra core would bring. Still good to take, but not massive.
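Checking that figure from the quoted `make` timings (wall-clock times only; the user/sys times measure total CPU work, not elapsed time):

```python
# Wall-clock ("real") times from the quoted make runs.
j4_real = 54.395  # make -j4 (4 cores, no extra threads used)
j8_real = 46.903  # make -j8 (hyperthreading engaged)

speedup = j4_real / j8_real
reduction = 1 - j8_real / j4_real
print(f"speedup: {speedup:.2f}x, wall-time reduction: {reduction:.0%}")
```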

That isn’t to say that Apple and other CPU/SoC manufacturers aren’t doing a bang up job, but if you think they will somehow have a monstrous price/performance/power advantage over x86, I have a bridge to sell you.
I partly agree in the sense that the advantage won't be monstrous but at the moment it seems there's a small power/cost advantage for the ARM camp if we believe AWS Graviton2 review for instance. I have little doubt AMD and Intel will fight back.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Raw perf probably does matter to server/datacenter buyers - but likely not nearly as much as perf/watt - cost of running both the chips AND the cooling is a big consideration over and above the performance itself.
So then Intel should make less than zero sales :eek:
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
So then Intel should make less than zero sales :eek:
A remark which I hope for you was meant as a joke :)
If Intel's process problems persist, the only joke will be on them. The recent supercomputer wins both going to AMD are a sign that things have shifted; if Intel doesn't scramble back to its former position, those two may be the first of many.
 

Nothingness

Platinum Member
Jul 3, 2013
2,420
751
136
If Intel's process problems persist, the only joke will be on them. The recent supercomputer wins both going to AMD are a sign that things have shifted; if Intel doesn't scramble back to its former position, those two may be the first of many.
I meant to say that IT won't turn away from Intel despite the technical problems you describe. This is obvious, and it's why I find lobz's comment silly and funny.

But as you say, if these problems persist, Intel will lose some market share. But I doubt Intel is standing still, and given IT inertia they might have time to counter the attacks from AMD and ARM.
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
I meant to say that IT won't turn away from Intel despite the technical problems you describe. This is obvious, and it's why I find lobz's comment silly and funny.

But as you say, if these problems persist, Intel will lose some market share. But I doubt Intel is standing still, and given IT inertia they might have time to counter the attacks from AMD and ARM.
Agreed - if anything the Coronavirus shutdowns/delays may be exactly what the doctor ordered for Intel.
 

DrMrLordX

Lifer
Apr 27, 2000
21,634
10,850
136
There isn’t any hyperthreading, no DDR4, No PCIE, Actually, come to think of it, that smartphone SoC is missing every single major feature that modern machines have. Before you know it, Apple has blown past a 45 watt TDP.

As I've mentioned in another thread . . . the ARM server world is moving forward without Apple.


So, what is the Graviton2? It’s a 64-core monolithic server chip design, using Arm’s new Neoverse N1 cores (Microarchitectural derivatives of the mobile Cortex-A76 cores) as well as Arm’s CMN-600 mesh interconnect. It’s a pretty straightforward design that is essentially almost identical to Arm’s 64-core reference N1 platform that the company had presented back a year ago. Amazon did diverge a little bit, for example the Graviton2’s CPU cores are clocked in at a bit lower 2.5GHz as well as including only 32MB instead of 64MB of L3 cache into the mesh interconnect. The system is backed by 8-channel DDR-3200 memory controllers, and the SoC supports 64 PCIe4 lanes for I/O. It’s a relatively textbook design implementation of the N1 platform, manufactured on TSMC’s 7nm process node.

Amazon is pretty serious about I/O with that chip. You can judge for yourself whether or not Amazon has reached parity with Intel or AMD. At this point, I'd still put my money on Rome, but obviously they can add I/O and interconnect and keep power usage at sane levels. To wit:

The N1 cores remain very lean and efficient, at a projected ~1.4mm² for a 1MB L2 cache implementation such as on the Graviton2, and sporting excellent power efficiency at around ~1W per core at the 2.5GHz frequency at which Amazon’s new chip arrives at.

Moving right along:

That isn’t to say that Apple and other CPU/SoC manufacturers aren’t doing a bang up job, but if you think they will somehow have a monstrous price/performance/power advantage over x86, I have a bridge to sell you.

I can agree with that. ARM has certainly improved; the question is, how much further can they go, and will they surpass x86?
 
Last edited: