Solved! ARM Apple High-End CPU - Intel replacement

Page 10 - AnandTech Forums

Richie Rich

Senior member
Jul 28, 2019
There is a first rumor about an Intel replacement in Apple products:
  • ARM based high-end CPU
  • 8 cores, no SMT
  • IPC +30% over Cortex A77
  • desktop performance (Core i7/Ryzen R7) with much lower power consumption
  • introduction with the new-gen MacBook Air in mid-2020 (MacBook Pro and iMac also under consideration)
  • massive AI accelerator

Source Coreteks:
 
Solution
What an understatement :D And it looks like it doesn't want to die. Yet.


Yes, A13 is competitive against Intel chips but the emulation tax is about 2x. So given that A13 ~= Intel, for emulated x86 programs you'd get half the speed of an equivalent x86 machine. This is one of the reasons they haven't yet switched.

Another reason is that it would prevent the use of Windows on their machines, something some say is very important.

The level of ignorance in this thread would be shocking if it weren't depressing.
Let's state some basics:

(a) History. Apple has never let backward compatibility limit what they do. They are not Intel, they are not Windows. They don't sell perpetual compatibility as a feature. Christ, the big...

Carfax83

Diamond Member
Nov 1, 2010
Forget about AVX512 in this context - that's really old-school tech compared to SVE. And while AVX512 gets similar speedups to SVE when hand-optimizing the code, SVE has much better support for compilers to auto-generate good code. In addition, SVE code is agnostic to vector length, so it runs essentially on any SVE implementation.

So the most successful CPU manufacturer in the World, Intel, is running old school tech? Come on man. This sentiment that Apple and ARM are geniuses and Intel and AMD are buffoons can only go so far.

As a non industry professional, it always makes me wonder why x86 (and Intel by virtue of association) gets criticized so much on Anandtech forums, yet has managed to beat all comers so far from both a price and performance standpoint over decades.
 

scannall

Golden Member
Jan 1, 2012
So the most successful CPU manufacturer in the World, Intel, is running old school tech? Come on man. This sentiment that Apple and ARM are geniuses and Intel and AMD are buffoons can only go so far.

As a non industry professional, it always makes me wonder why x86 (and Intel by virtue of association) gets criticized so much on Anandtech forums, yet has managed to beat all comers so far from both a price and performance standpoint over decades.
Has nothing to do with x86 being good, it isn't. But it's what the world is locked into. Apple's ARM chips outperform Intel clock for clock by roughly 30%. And do it in a phone.

The PC world is still locked into choices IBM made 40 years ago.
 

Thala

Golden Member
Nov 12, 2014
Soooo, what you are basically saying is that I was right in the first place about being able to have cores with differing vector execution strengths in the same SoC, because that is all that I meant.

Actually it is the other way around. The registers are the interface to the programmer, and they are described in the technical reference manuals, the ISA and the ABI documentation. So when we refer to vector size or number of lanes, we are always talking about the architectural registers. The actual implementation of the ALUs is a microarchitectural detail.
For example, when you refer to Advanced SIMD (aka NEON) having a 128-bit vector length, you always mean the register size, not the ALUs. Every NEON implementation has 128-bit wide registers...the ALUs can be much smaller.
And when we say that SVE is vector-length agnostic...we are always referring to the registers and not the ALUs.
And when we say SVE can be up to 2048 bits wide, we again refer to the registers only...never to the width of the ALUs.

Coming back to the discussion: both big and little cores need an SVE implementation with the same vector size (say 256 bit, for example), while the implementation details (like the number and width of ALUs) can and will differ. Then the architectural state is the same and you can freely migrate your contexts.

I hope it's clearer now.
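To make the "vector-length agnostic" idea concrete, here is a plain-Python sketch of the loop structure (no real SVE intrinsics; `vec_lanes()` is a hypothetical stand-in for the hardware's runtime vector-length query, which real SVE code would get via an intrinsic like `svcntd()` from `<arm_sve.h>`):

```python
# Sketch of vector-length-agnostic (VLA) code: the loop never
# hard-codes a vector width, so the same source runs unchanged on a
# 128-bit or a 2048-bit SVE implementation.

def vec_lanes() -> int:
    # Hypothetical runtime query: a 256-bit implementation would
    # report 4 lanes of 64-bit elements.
    return 4

def vla_sum(xs: list[float]) -> float:
    vl = vec_lanes()          # queried at runtime, never a constant
    acc = 0.0
    for i in range(0, len(xs), vl):
        # The slice masks off lanes past the end of the array, which
        # is what SVE's 'whilelt' predication does in hardware.
        acc += sum(xs[i:i + vl])
    return acc

print(vla_sum([1, 2, 3, 4, 5, 6, 7]))  # 28.0
```

Only `vec_lanes()` changes between implementations; the code itself, like SVE binaries, stays identical.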
 

Thala

Golden Member
Nov 12, 2014
So the most successful CPU manufacturer in the World, Intel, is running old school tech? Come on man. This sentiment that Apple and ARM are geniuses and Intel and AMD are buffoons can only go so far.

Why are you even asking if you always question the answers? As scannall said, the PC world is pretty much locked in.
It does not help much to introduce a better architecture if that new architecture has to emulate all the legacy programs, easily losing a factor of 2-3 in performance while doing so.
 

Carfax83

Diamond Member
Nov 1, 2010
Has nothing to do with x86 being good, it isn't. But it's what the world is locked into. Apple's ARM chips outperform Intel clock for clock by roughly 30%. And do it in a phone.

The PC world is still locked into choices IBM made 40 years ago.

We've been having that discussion for several pages already. No one can deny that Apple's high performance ARM cores are impressive, but you have to look at it in context. Apple can design ultra wide high IPC low clock speed CPUs because it makes sense to do so given their performance and power usage priorities. Such a design however would likely fail in a desktop or laptop setting for several reasons already mentioned. For a CPU based on the A13/A14 core to make it on desktop or laptop would require many microarchitectural changes, which would invariably affect the IPC. Even Thala said as much.

As I've been saying, AMD and Intel aren't idiots. There's nothing stopping them from making a similar design over the years, but the question is, would it have been successful across the multiple workloads that both Intel and AMD require from their cores, i.e. servers, HPC, desktop, laptops, workstations, etcetera? Both AMD and Intel design their architectures to be flexible in different environments.
 

Carfax83

Diamond Member
Nov 1, 2010
Why are you even asking if you always question the answers? As scannall said, the PC world is pretty much locked in.

Well I'm not saying you guys are flat out wrong. I honestly don't possess the technical knowledge to really debate you on this subject, but I wish I did ;). But I just find it odd that what I'm being told contradicts what has occurred, and is occurring in the World.

And I'm not just talking about PC land. x86 has historically dominated or still dominates in servers, HPC etc where Windows isn't a thing. If x86 is so bad, how did it beat out so many other architectures in those other market sectors?

That's what I'm getting at. Actually, Intel's greatest threat in the server market is AMD, another x86 manufacturer.

It does not help much to introduce a better architecture if that new architecture has to emulate all the legacy programs, easily losing a factor of 2-3 in performance while doing so.

Yeah I get that, I really do. Unless there's an absolutely massive performance upgrade, there's no incentive for many to abandon x86. But as I mentioned above, x86 has dominated, and continues to dominate not just the PC space. Heck, just a few months ago AMD announced it's collaborating with Cray to design what will be the World's fastest supercomputer by a hefty margin in 2021.
 

Nothingness

Platinum Member
Jul 3, 2013
Well I'm not saying you guys are flat out wrong. I honestly don't possess the technical knowledge to really debate you on this subject, but I wish I did ;). But I just find it odd that what I'm being told contradicts what has occurred, and is occurring in the World.

And I'm not just talking about PC land. x86 has historically dominated or still dominates in servers, HPC etc where Windows isn't a thing. If x86 is so bad, how did it beat out so many other architectures in those other market sectors?
Price, ease of access (everyone had an x86 on his desk, so moving up to a server that runs x86 was easier), and Intel and AMD making really good chips. That doesn't mean x86 is any good (it isn't); it just means engineers could design great chips and price them low enough.

As far as AVX2, AVX-512, etc. go, it's pretty obvious to me that they are worse than SVE. What Intel is doing here might be just a way to segment the market. Need larger vectors? Pick this new chip and write code that won't run on previous generation chips. They even fuse off AVX on all of their Celeron, Pentium chips based on Core architecture. So having a scalable vector engine would not be good for them from a commercial point of view.

That's what I'm getting at. Actually, Intel's greatest threat in the server market is AMD, another x86 manufacturer.
Exactly my thought too. AMD getting back strong has/will certainly slow down ARM penetration in the server market.

Yeah I get that, I really do. Unless there's an absolutely massive performance upgrade, there's no incentive for many to abandon x86. But as I mentioned above, x86 has dominated, and continues to dominate not just the PC space. Heck, just a few months ago AMD announced it's collaborating with Cray to design what will be the World's fastest supercomputer by a hefty margin in 2021.
And its performance will also come from GPU.
 

DrMrLordX

Lifer
Apr 27, 2000
No, you don't have to do this. You just write vector-length-agnostic SVE code and that's it ... not 3 loop variants. My point was that whatever agnostic code you write, the OS is not allowed to migrate that code dynamically between different vector-length implementations.

Hmm. Okay, I think I see what you are saying here. The obvious solution is to make sure that all cores in a heterogeneous SoC have the same register size. That may not always be workable from a design perspective, especially not for mobile and/or embedded SoCs.
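The constraint can be pictured with a toy context-switch model (all names here are made up for illustration): the OS saves a core's architectural vector registers on a context switch, and that saved state only fits a core with the same architectural vector length.

```python
# Toy model of why contexts can't migrate between cores whose
# architectural vector lengths differ: the saved register state
# simply doesn't match the destination core's registers.

class Core:
    def __init__(self, vector_bits: int):
        self.vector_bits = vector_bits

def save_context(core: Core, z0: list[int]) -> dict:
    # z0 is one vector register as 64-bit lanes.
    assert len(z0) * 64 == core.vector_bits
    return {"vector_bits": core.vector_bits, "z0": z0[:]}

def restore_context(core: Core, ctx: dict) -> list[int]:
    # Restoring onto a core with a different vector length would
    # silently truncate or garble state, so it must be refused.
    if ctx["vector_bits"] != core.vector_bits:
        raise ValueError("cannot migrate: vector lengths differ")
    return ctx["z0"][:]

big = Core(256)      # say, a 256-bit SVE big core
little = Core(128)   # a hypothetical 128-bit little core

ctx = save_context(big, [1, 2, 3, 4])  # four 64-bit lanes
restore_context(big, ctx)              # fine: same vector length
# restore_context(little, ctx)         # would raise ValueError
```

Giving every core the same architectural vector size (while letting ALU widths differ underneath) makes the last line legal again, which is Thala's point.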
 

Thala

Golden Member
Nov 12, 2014
Yeah I get that, I really do. Unless there's an absolutely massive performance upgrade, there's no incentive for many to abandon x86. But as I mentioned above, x86 has dominated, and continues to dominate not just the PC space. Heck, just a few months ago AMD announced it's collaborating with Cray to design what will be the World's fastest supercomputer by a hefty margin in 2021.

Let's just not pretend that this is a good situation for the consumer. Even a massive performance gain of 50% - the equivalent of the past 10 years of Intel cores or so - is not enough to offset the emulation penalty. But since we are in an Apple thread: they can manage the transition, and their users will eventually see a net performance gain once most applications are re-compiled for ARM64. It is unfortunately not as easy in the PC space.
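The arithmetic behind this is worth spelling out. A back-of-envelope sketch, using the 50% native gain and the roughly 2x emulation penalty quoted above:

```python
# Speed relative to the original x86 machine (= 1.0), assuming the
# new CPU is `native_gain` faster natively but pays `emu_penalty`
# when running emulated x86 code.

def emulated_speed(native_gain: float, emu_penalty: float) -> float:
    return (1.0 + native_gain) / emu_penalty

print(emulated_speed(0.5, 2.0))  # 0.75 -> emulated code is still 25% slower
print(emulated_speed(0.5, 1.0))  # 1.5  -> the gain only shows once recompiled
```

So until applications are recompiled, even a substantially faster chip feels like a downgrade.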
 

Thunder 57

Platinum Member
Aug 19, 2007
Has nothing to do with x86 being good, it isn't...

I have to disagree. Show me any other industry that is dominated by something that is "not good". If you want to argue that there are better alternatives to x86 out there, that's one thing. But to claim that it isn't any good is disingenuous at best.
 

Carfax83

Diamond Member
Nov 1, 2010
That doesn't mean x86 is any good, it isn't, it just means engineers could design great chips and price them low enough.

What about the whole "ISA doesn't matter anymore" thing? That's what I keep seeing now whenever I browse the internet on this subject. Whatever x86 started as, it's now become something completely different.

Heck, even Intel tried to kill off x86 before with Itanium, and look how that turned out. It was in Intel's interest to get rid of x86 so that they could get rid of AMD, but AMD blindsided them by extending the x86 architecture to 64 bit.
 

Carfax83

Diamond Member
Nov 1, 2010
Let's just not pretend that this is a good situation for the consumer. Even a massive performance gain of 50% - the equivalent of the past 10 years of Intel cores or so - is not enough to offset the emulation penalty. But since we are in an Apple thread: they can manage the transition, and their users will eventually see a net performance gain once most applications are re-compiled for ARM64. It is unfortunately not as easy in the PC space.

Yeah I do agree that emulation would be a bad alternative, if ARM ever manages to trounce x86-64 in every metric. I would think that if that ever happens, AMD, Intel, Microsoft and any other heavyweights in PC land would have to leverage their influence to start recompiling on a massive scale for ARM64. That's assuming that Intel and AMD would have abandoned x86 and put their resources towards their own ARM64 designs.
 

Nothingness

Platinum Member
Jul 3, 2013
What about the whole "ISA doesn't matter anymore" thing? That's what I keep seeing now whenever I browse the internet on this subject. Whatever x86 started as, it's now become something completely different.
I'm saying that from a compiler and assembly language developer point of view. And also from a chip designer point of view. We are talking about an ISA that was started with the intent to be compatible at the assembler source level with i8080. Yes it has evolved, but that doesn't prevent it from stinking.

As I wrote, the ISA doesn't matter because CPU designers spend a huge amount of time getting around the complexity of the underlying ISA, so all "traditional" ISAs end up in the same spot. But I can guarantee you that developing an x86-64 decoder does not have the same cost as developing an AArch64 one.
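One structural reason for that cost gap can be shown in a few lines. This is a toy sketch (the "ISAs" here are invented; only the length-finding structure mirrors the real situation): with fixed 4-byte instructions, all instruction boundaries are known up front and decoders can work in parallel, whereas with variable-length encoding each instruction must be (partially) decoded before the next one's start is even known.

```python
# Toy illustration of fixed-width vs variable-length instruction decode.

def fixed_width_boundaries(code: bytes, width: int = 4) -> list[int]:
    # AArch64-style: every instruction is 4 bytes, so all start
    # offsets are computable immediately and independently.
    return list(range(0, len(code), width))

def variable_length_boundaries(code: bytes) -> list[int]:
    # x86-style: length depends on prefixes/opcode/ModRM, so
    # boundaries must be discovered serially. Toy rule: the first
    # byte of each instruction encodes its own length.
    offsets, i = [], 0
    while i < len(code):
        offsets.append(i)
        i += code[i]  # must decode instruction i to find i+1
    return offsets

print(fixed_width_boundaries(bytes(12)))                      # [0, 4, 8]
print(variable_length_boundaries(bytes([2, 0, 3, 0, 0, 1])))  # [0, 2, 5]
```

Real x86 decoders work around this serial dependency with predecode bits, speculative length decoding, and uop caches, which is exactly the extra cost being referred to.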

Heck, even Intel tried to kill off x86 before, with Itanium and look how that turned out? It was in Intel's interest to get rid of x86 so that they could get rid of AMD, but AMD blindsided them by extending the x86 architecture to 64 bit.
Itanium relied on the premise that compilers could extract enough performance. This failed badly.

And more important: never underestimate the resistance to change. If x86 is working, is good enough, and you have legacy SW, why change? That still applies, with the extra difficulty that x86 CPU performance is really good now.

You'd need tremendous advantage to consider switching. Like twice the perf for the same cost and power.

I'm talking here from an enterprise point of view.
 

Nothingness

Platinum Member
Jul 3, 2013
I always say that the ISA does matter.
I respectfully disagree :)

At high-level of performance, the ISA doesn't matter (I mean "traditional" ISA, not VLIW or other different things). If you have a free pass for silicon area and great engineers you can make that horrible x86 fly.

At the lower end of the spectrum, where area matters, the ISA certainly matters. And that's why ARM forked its instruction set with the v7/v8-M architectures.

And for assembly language programmers the ISA matters. I almost never write x86 assembly, even reading it drives me nuts. ARM, MIPS, PPC, SPARC etc. are much more readable to me.
 

soresu

Platinum Member
Dec 19, 2014
The CGChannel post outlines that this is not feature-complete relative to desktop Photoshop; supposedly updates will follow. Not sure I would buy into that, considering it's on a subscription model rather than the one-off payment for Serif Affinity Photo.
 

soresu

Platinum Member
Dec 19, 2014
Perhaps Adobe had already finished PS for the ARM MacBook, so they forked the iPad version too.
It looks like preparation for a mass ARM migration is in progress.
It has already been underway for some time, though not quite the way you think if I'm correct.

I believe they have finally bestirred themselves to overhaul the codebases of their major moneymakers at Adobe (Photoshop, Premiere, After Effects), this in combo with the acquisition of Allegorithmic (Substance Painter/Designer/Alchemist) might be a significant bid to retain future relevance.

Overhauling those old code bases to be more modular, and easier to develop a UI for on new platforms (like iOS/iPad OS and Android/Chrome OS) is essential to their future survival.

To wit: Photoshop for Windows still lacks the capability to properly scale the UI with the system DPI settings, more than a decade after small 1080p monitors became available, let alone 4K more recently. If that doesn't show how bad the current UI code base is, I don't know what will.
 

Richie Rich

Senior member
Jul 28, 2019
It has already been underway for some time, though not quite the way you think if I'm correct.

I believe they have finally bestirred themselves to overhaul the codebases of their major moneymakers at Adobe (Photoshop, Premiere, After Effects), this in combo with the acquisition of Allegorithmic (Substance Painter/Designer/Alchemist) might be a significant bid to retain future relevance.

Overhauling those old code bases to be more modular, and easier to develop a UI for on new platforms (like iOS/iPad OS and Android/Chrome OS) is essential to their future survival.

To wit: Photoshop for Windows still lacks the capability to properly scale the UI with the system DPI settings, more than a decade after small 1080p monitors became available, let alone 4K more recently. If that doesn't show how bad the current UI code base is, I don't know what will.
I see your point. Modularity is the future, no doubt. They could release RISC-V versions very easily as well (Apple's valuation is 650x higher than ARM Corp.'s; Apple could buy ARM, move to RISC-V, or develop its own RISC ISA).

However, my point of view is the HW development plan. If Apple decided to release a CPU/SoC for the MacBook Air in 2020, that decision had to be made 3 years ago. At the same time, Apple would have started SW adaptation internally (ARM OS X) and externally (notifying SW companies to get ready for the ARM migration). And maybe that's why Adobe decided to overhaul the codebases.
 

Nothingness

Platinum Member
Jul 3, 2013
(Apple's valuation is 650x higher than ARM Corp.'s; Apple could buy ARM, move to RISC-V, or develop its own RISC ISA).
How do you get ARM's valuation, since it's part of SoftBank?

However, my point of view is the HW development plan. If Apple decided to release a CPU/SoC for the MacBook Air in 2020, that decision had to be made 3 years ago. At the same time, Apple would have started SW adaptation internally (ARM OS X) and externally (notifying SW companies to get ready for the ARM migration).
If I had to guess, I'd say Mac OS X for ARM has been in "production" internally for at least 5 years.
 

soresu

Platinum Member
Dec 19, 2014
I see your point. Modularity is the future, no doubt. They could release RISC-V versions very easily as well (Apple's valuation is 650x higher than ARM Corp.'s; Apple could buy ARM, move to RISC-V, or develop its own RISC ISA).
I think many overlook Microsoft's personal (because of course corporations are people!) contributions to and interest in the E2 EDGE ISA and uArch, especially considering there was a rumour that they ported Windows to it fairly recently. That makes it the only ISA other than ARM receiving new attention (new as in not yet a baseline standard, as ARM isn't really established in Windows until the app library catches up).
 

soresu

Platinum Member
Dec 19, 2014
Apple's valuation is 650x higher than ARM Corp.'s; Apple could buy ARM
You may have forgotten that ARM is no longer the independent company it was in 2016.

SoftBank bought ARM for reasons I imagine that did not extend to awesome licensing royalties, but rather market control (at least to some extent).

Whatever SoftBank's exact reason or reasons, they would not release ARM now. In point of fact, Apple buying ARM might be seen as fundamentally anti-competitive, to say nothing of pointless, considering Apple currently has a superior uArch to ARM's Cortex flagships, at least from a raw performance perspective. I'm not sure exactly how it compares to the A76 on perf/watt (though I would imagine the gap is not quite as impressive as the one for raw performance).

Edit: Nothingness beat me to it.
 

soresu

Platinum Member
Dec 19, 2014
More news from the open ARM platform GPU front: support for Qualcomm's latest Adreno 6xx series seems to be progressing, with both OpenGL and Vulkan drivers.

Link here.

And hell followed with him.....

Chromebooks, I meant Chromebooks followed with him..... :sweatsmile:
 

Nothingness

Platinum Member
Jul 3, 2013
to say nothing of pointless, considering they have a superior uArch to ARM's Cortex flagships currently, at least from a raw performance perspective; I'm not sure exactly how it compares to the A76 on perf/watt (though I would imagine the gap is not quite as impressive as the one for raw performance)
According to Andrei's review of iPhone 11, Cortex-A76 is slightly more power efficient than A11/A12/A13. But its performance is about that of Apple A10.