Solved! ARM Apple High-End CPU - Intel replacement

Page 18 - AnandTech Forums

Richie Rich

Senior member
Jul 28, 2019
470
229
76
There is a first rumor about an Intel replacement in Apple products:
  • ARM based high-end CPU
  • 8 cores, no SMT
  • IPC +30% over Cortex A77
  • desktop performance (Core i7/Ryzen R7) with much lower power consumption
  • introduction with new gen MacBook Air in mid 2020 (considering also MacBook PRO and iMac)
  • massive AI accelerator

Source Coreteks:
 
Solution
What an understatement :D And it looks like it doesn't want to die. Yet.


Yes, A13 is competitive against Intel chips but the emulation tax is about 2x. So given that A13 ~= Intel, for emulated x86 programs you'd get half the speed of an equivalent x86 machine. This is one of the reasons they haven't yet switched.
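A quick sanity check of that arithmetic, as a sketch (the 2x emulation tax is the figure discussed in this thread, not a measurement):

```python
# Toy model of the emulation penalty described above.
# Premise from the thread: A13 native perf ~= Intel native perf,
# and x86-on-ARM binary translation costs roughly 2x.

def emulated_speed(native_perf: float, emulation_tax: float = 2.0) -> float:
    """Effective speed of an x86 binary emulated on an ARM core."""
    return native_perf / emulation_tax

intel_native = 1.0                 # normalize the x86 machine to 1.0
a13_native = intel_native          # the thread's premise: "A13 ~= Intel"
print(emulated_speed(a13_native))  # half the speed of the equivalent x86 box
```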

Another reason is that it would prevent the use of Windows on their machines, something some say is very important.

The level of ignorance in this thread would be shocking if it weren't depressing.
Let's state some basics:

(a) History. Apple has never let backward compatibility limit what they do. They are not Intel, they are not Windows. They don't sell perpetual compatibility as a feature. Christ, the big...

Thunder 57

Platinum Member
Aug 19, 2007
2,647
3,706
136
Can Apple's A13 get real work done? If it can then why hasn't Apple offered the A13 in an actual HEDT Workstation? Why haven't they come out with a new Mac Pro using the Wonder SoC?

Apple's A13 SoC ships only in iPhones and iPads.

Exactly this. Can I run Handbrake on an A13? If I could and beat the snot out of everybody else while sipping power, I'm sure people would be all over that.

Also, the A13 can hit 4GHz, just because you say so? You don't know that. That's like saying Itanium could scale up to 4GHz+ just because other designs have.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Can Apple's A13 get real work done? If it can then why hasn't Apple offered the A13 in an actual HEDT Workstation? Why haven't they come out with a new Mac Pro using the Wonder SoC?

Apple's A13 SoC ships only in iPhones and iPads.
There are a few reasons:
  1. Market size. HEDT is just a fraction of overall x86 revenue, which is driven mainly by servers. The smartphone market is 7x bigger than the server market. IMHO the x86 products in Apple's portfolio are secondary.
  2. Apple has had its significant 6xALU advantage for only 3 years, since 2017. An ISA swap needs preparation time from software vendors.
  3. Laptops might be easier, as they could use the iPad (4-core) SoC version. But HEDT would need different silicon: more cores, more I/O. That takes time to develop. If Apple hypothetically made the decision in 2017, plus 4 years of development, that means an HEDT swap in maybe 2021. But who knows. The rumor is an ARM MacBook Air this year.

Core parameters comparison:
CPU        | ALU  | L1 cache [kB] | L2 cache [MB]             | Die size incl. L2 [mm2]
AMD Zen2   | 4    | 32            | 0.5                       | 3.64
Apple A13  | 6    | 128           | 8.0 (shared by two cores) | 6.74 (13.48 for two cores)
Difference | +50% | +300%         | +1500%                    | +85%

It's clear the Apple core is a monster in every way, especially in die size.
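For anyone checking the table, the Difference row can be recomputed from the raw figures (numbers as quoted in the post; the die-size column compares the single-core A13 figure):

```python
# Recompute the "Difference" row of the Zen2 vs A13 comparison above.
zen2 = {"ALU": 4, "L1 [kB]": 32, "L2 [MB]": 0.5, "die [mm2]": 3.64}
a13  = {"ALU": 6, "L1 [kB]": 128, "L2 [MB]": 8.0, "die [mm2]": 6.74}

def pct_increase(base: float, new: float) -> int:
    """Rounded percentage increase of new over base."""
    return round((new - base) / base * 100)

for key in zen2:
    print(f"{key}: +{pct_increase(zen2[key], a13[key])}%")
# ALU +50%, L1 +300%, L2 +1500%, die +85%, matching the table
```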
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
There are a few reasons:
  1. Market size. HEDT is just a fraction of overall x86 revenue, which is driven mainly by servers. The smartphone market is 7x bigger than the server market. IMHO the x86 products in Apple's portfolio are secondary.
  2. Apple has had its significant 6xALU advantage for only 3 years, since 2017. An ISA swap needs preparation time from software vendors.
  3. Laptops might be easier, as they could use the iPad (4-core) SoC version. But HEDT would need different silicon: more cores, more I/O. That takes time to develop. If Apple hypothetically made the decision in 2017, plus 4 years of development, that means an HEDT swap in maybe 2021. But who knows. The rumor is an ARM MacBook Air this year.

Core parameters comparison:
CPU        | ALU  | L1 cache [kB] | L2 cache [MB]             | Die size incl. L2 [mm2]
AMD Zen2   | 4    | 32            | 0.5                       | 3.64
Apple A13  | 6    | 128           | 8.0 (shared by two cores) | 6.74 (13.48 for two cores)
Difference | +50% | +300%         | +1500%                    | +85%

It's clear the Apple core is a monster in every way, especially in die size.
As said above. If it was so great, why is it not being used anywhere but phones? Because it's not as fast as the other solutions for what they do.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,647
3,706
136
There are a few reasons:
  1. Market size. HEDT is just a fraction of overall x86 revenue, which is driven mainly by servers. The smartphone market is 7x bigger than the server market. IMHO the x86 products in Apple's portfolio are secondary.
  2. Apple has had its significant 6xALU advantage for only 3 years, since 2017. An ISA swap needs preparation time from software vendors.
  3. Laptops might be easier, as they could use the iPad (4-core) SoC version. But HEDT would need different silicon: more cores, more I/O. That takes time to develop. If Apple hypothetically made the decision in 2017, plus 4 years of development, that means an HEDT swap in maybe 2021. But who knows. The rumor is an ARM MacBook Air this year.

Core parameters comparison:
CPU        | ALU  | L1 cache [kB] | L2 cache [MB]             | Die size incl. L2 [mm2]
AMD Zen2   | 4    | 32            | 0.5                       | 3.64
Apple A13  | 6    | 128           | 8.0 (shared by two cores) | 6.74 (13.48 for two cores)
Difference | +50% | +300%         | +1500%                    | +85%

It's clear the Apple core is a monster in every way, especially in die size.

HEDT and servers are also very high margin. It's not as if Apple would have to divert resources due to a lack of money or personnel to create these things. They basically print money. If money were a problem then yes, it would not make sense to divert from your bread and butter. But the way you make it sound, it's just more money for the taking. I can't imagine it would be too difficult to put an A13 into a laptop and sell it as a high-performance laptop.

Ultimately, that is where we just disagree. You call it a monster and say x86 should be very concerned. I say let's wait until we see some products. If they make an awesome ARM Macbook this year that performs anywhere near what you suggest it could do, I would gladly (well maybe not gladly :p ) admit that I was wrong.
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
HEDT and servers are also very high margin. It's not as if Apple would have to divert resources due to a lack of money or personnel to create these things. They basically print money. If money were a problem then yes, it would not make sense to divert from your bread and butter. But the way you make it sound, it's just more money for the taking. I can't imagine it would be too difficult to put an A13 into a laptop and sell it as a high-performance laptop.

Ultimately, that is where we just disagree. You call it a monster and say x86 should be very concerned. I say let's wait until we see some products. If they make an awesome ARM Macbook this year that performs anywhere near what you suggest it could do, I would gladly (well maybe not gladly :p ) admit that I was wrong.
Man, if I were running Apple I wouldn't need the RDF. Instead I would make the current Macs, MacBooks, a brand-new up-to-date Mac Pro, and other products genuinely worthwhile, things people actually want and benefit from using.

And of course fire the entire marketing department and replace them with highly knowledgeable employees.

However, I'd much rather run a hardware company that designs and manufactures Open Source Hardware for FOSS systems. :D
 

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
I think I'm in between the two camps. Yes, Apple cores are fast and could, from a performance perspective, be used in laptops or desktops. But the market is too small, and it's probably (purely my guess) not worth it for Apple to make at least 2 additional SoCs for a high-end laptop and the Mac Pro.

Apple can scale their uarch up to 4GHz (+50%) if there is need for max performance.

Citation needed. This is wishful thinking. The wider the chip, the harder it is to clock it high. Wide and low-clocked makes sense in mobile applications, where you have full control over the software. Why? Because it gives you the best performance/watt, but control of the software is needed because legacy software only has so much ILP and won't benefit from a beefy ultra-wide core.

Hence the x86 crowd needs to balance. A 2GHz 10-wide x86 CPU or whatever would probably be cool for servers running only modern software. But we all know what old crap actually runs out there. x86 is all about backwards compatibility and avoiding performance regressions even on old software. That means x86 favors high clocks over core width.

So the chips are made for their use cases and hence make sense for both Apple and Intel/AMD. The A13 might look great on these micro-benches, but I wonder how it would fare running 40-year-old COBOL code in banks.

Yeah, Apple could use it for desktops and workstations, but personally I would never consider buying an Apple-based server. They have no clue what long-term backwards compatibility means, including in the OS itself.

Can Apple's A13 get real work done? If it can then why hasn't Apple offered the A13 in an actual HEDT Workstation? Why haven't they come out with a new Mac Pro using the Wonder SoC?

Apple's A13 SoC ships only in iPhones and iPads.

As I have written in this thread probably a dozen times already, it's most likely not financially worth it because the whole Mac market is too small. Each additional die will cost them double- if not triple-digit millions. I doubt the MacBook market is large enough to make this worth it; it might be, but what certainly isn't large enough is the Mac Pro market. They would have to ditch the Mac Pro, or make it the only device remaining on x86 and keep an x86 version of OS X.
The better option is simply to pressure Intel for good deals, or switch to AMD if they offer a better deal. That would cost them far less, and people don't buy Apple because of the CPU; they buy because it's Apple.

On top of that, all the software like the stuff from Adobe, MS etc. would have to be changed to support ARM. If I were Apple's CEO, that would be too much trouble for too little gain.
 

Nothingness

Platinum Member
Jul 3, 2013
2,371
713
136
I think I'm in between the two camps. Yes, Apple cores are fast and could, from a performance perspective, be used in laptops or desktops. But the market is too small, and it's probably (purely my guess) not worth it for Apple to make at least 2 additional SoCs for a high-end laptop and the Mac Pro.
I'm with you on that.

Citation needed. This is wishful thinking. The wider the chip, the harder it is to clock it high. Wide and low-clocked makes sense in mobile applications, where you have full control over the software. Why? Because it gives you the best performance/watt, but control of the software is needed because legacy software only has so much ILP and won't benefit from a beefy ultra-wide core.

Hence the x86 crowd needs to balance. A 2GHz 10-wide x86 CPU or whatever would probably be cool for servers running only modern software. But we all know what old crap actually runs out there. x86 is all about backwards compatibility and avoiding performance regressions even on old software. That means x86 favors high clocks over core width.
Exactly.

So the chips are made for their use cases and hence make sense for both Apple and Intel/AMD. The A13 might look great on these micro-benches, but I wonder how it would fare running 40-year-old COBOL code in banks.
On micro-benchmarks? It looks great on SPEC 2006, which is not a micro-benchmark.

On top of that, all the software like the stuff from Adobe, MS etc. would have to be changed to support ARM.
You didn't notice that Adobe and MS have a lot of experience with ARM software?
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Also, the A13 can hit 4GHz, just because you say so? You don't know that. That's like saying Itanium could scale up to 4GHz+ just because other designs have.

Invalid comparison. The A13 is used in mobile, running close to nominal process voltage, while Itanium ran in overdrive. So there is voltage headroom for the A13, while there wasn't much for Itanium. Second, the A13 uses low-power cells as much as possible, while Itanium did not.
Any mobile architecture has huge frequency headroom when pushed. Physics dictates that it is possible.

TSMC demonstrated this for the Cortex-A72, a core that typically runs below 2.5GHz in mobile SoCs. They both increased the voltage and used high-performance cells, but otherwise used the fully synthesizable soft-core A72 from ARM.

Frequency | 2.8 GHz | 3.0 GHz | 3.5 GHz | 4.0 GHz | 4.2 GHz
Voltage   | 0.775 V | 0.825 V | 0.95 V  | 1.20 V  | 1.375 V
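Worth noting what that headroom costs: folding the table into the first-order dynamic-power relation P ~ f * V^2 (a rough CMOS approximation that ignores leakage) shows the top bins burning several times the baseline power:

```python
# Relative dynamic power for TSMC's A72 demo points above,
# using the first-order model P_dyn ~ C * f * V^2.
points = [(2.8, 0.775), (3.0, 0.825), (3.5, 0.95), (4.0, 1.20), (4.2, 1.375)]
f0, v0 = points[0]  # 2.8 GHz @ 0.775 V as the baseline

for f, v in points:
    rel = (f / f0) * (v / v0) ** 2
    print(f"{f} GHz @ {v} V -> {rel:.2f}x baseline dynamic power")
# The 4.2 GHz point draws roughly 4.7x the 2.8 GHz power for a +50% clock.
```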
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Invalid comparison. The A13 is used in mobile, running close to nominal process voltage, while Itanium ran in overdrive. So there is voltage headroom for the A13, while there wasn't much for Itanium. Second, the A13 uses low-power cells as much as possible, while Itanium did not.
Any mobile architecture has huge frequency headroom when pushed. Physics dictates that it is possible.

TSMC demonstrated this for the Cortex-A72, a core that typically runs below 2.5GHz in mobile SoCs. They both increased the voltage and used high-performance cells, but otherwise used the fully synthesizable soft-core A72 from ARM.

Frequency | 2.8 GHz | 3.0 GHz | 3.5 GHz | 4.0 GHz | 4.2 GHz
Voltage   | 0.775 V | 0.825 V | 0.95 V  | 1.20 V  | 1.375 V
Exactly. Some people haven't realized that low-power and high-speed uarchs have one important thing in common: a pipeline divided into a higher number of very short stages. You can see that Zen2 can sip very little power in the 64-core Rome (scaled down to 2.6GHz it needs 0.8V?). In the same way, a low-power A72 can scale up to 4.0GHz (with increased voltage) if it's manufactured at lower density on an HP process. However, Apple doesn't need to scale up at all. Their A14X will beat every x86 CPU in ST at just 2.7-2.8GHz.

Just look at the Geekbench 5.1 sub-tests. Somebody mentioned Blender and Cinebench:
raytracing sub-test: Zen2 1710 vs A13 2137 (+25% faster) at clocks of 4.6 vs 2.6GHz, i.e. 2.21x the PPC (+121%)

This Apple core is a true 6xALU monster. As long as Intel and AMD stay with just 4xALU back-end designs they will not be competitive. They need to bring 6xALU cores too.
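The per-clock arithmetic in that raytracing line can be checked directly (scores and clocks as quoted above):

```python
# Verify the Geekbench 5.1 raytracing sub-test comparison quoted above.
zen2_score, a13_score = 1710, 2137   # sub-test scores
zen2_ghz, a13_ghz = 4.6, 2.6         # clock speeds in GHz

speedup = a13_score / zen2_score                              # ~1.25
ppc_ratio = (a13_score / a13_ghz) / (zen2_score / zen2_ghz)   # ~2.21

print(f"A13 is +{(speedup - 1) * 100:.0f}% faster")           # +25%
print(f"and has {ppc_ratio:.2f}x the per-clock performance")  # 2.21x
```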

 

Thunder 57

Platinum Member
Aug 19, 2007
2,647
3,706
136
This Apple core is a true 6xALU monster. As long as Intel and AMD stay with just 4xALU back-end designs they will not be competitive. They need to bring 6xALU cores too.

Why would they need to do that? After all, you keep saying the smartphone market is 7x larger than the server market, and that therefore Apple doesn't need to bother with ARM in laptops/desktops/servers. You are all over the place.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Why would they need to do that? After all, you keep saying the smartphone market is 7x larger than the server market, and that therefore Apple doesn't need to bother with ARM in laptops/desktops/servers. You are all over the place.
  1. Nuvia/ex-Apple CPUs are coming to servers in 2023. Apple moved from 4xALU to 6xALU in just 4 years. Nuvia in 2023 might have 6xALU as a minimum, but maybe even wider than that: a monstrous 8xALU core. All those tiny, slow x86 4xALU cores will be just a prehistoric joke (2016 tech) compared to this monster.
  2. Generic Cortex will move to a 6xALU core soon too. The A77 already has higher PPC than Zen2. And they bring +25% performance every year.

Yes, the x86 world can sit still with its prehistoric 4xALU design and slowly die. Just like PowerPC and many others. That's also possible.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,647
3,706
136
  1. Nuvia/ex-Apple CPUs are coming to servers in 2023. Apple moved from 4xALU to 6xALU in just 4 years. Nuvia in 2023 might have 6xALU as a minimum, but maybe even wider than that: a monstrous 8xALU core. All those tiny, slow x86 4xALU cores will be just a prehistoric joke (2016 tech) compared to this monster.
  2. Generic Cortex will move to a 6xALU core soon too. The A77 already has higher PPC than Zen2. And they bring +25% performance every year.

Yes, the x86 world can sit still with its prehistoric 4xALU design and slowly die. Just like PowerPC and many others. That's also possible.

Citation needed. You really seem to have a case of pe.... ALU envy. "Prehistoric joke"? The only joke I see is your signature. If a power-sipping A13 with far fewer cores beat a 3950X, don't you think Apple would do everything they could to get that chip into larger form factors?

I'll put it another way. Why aren't companies running their servers on iPhones? I mean, who needs racks? Just get a few iPhones and you're good to go. Saves a ton on A/C too, I'd bet.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
TSMC demonstrated this for the Cortex-A72, a core that typically runs below 2.5GHz in mobile SoCs. They both increased the voltage and used high-performance cells, but otherwise used the fully synthesizable soft-core A72 from ARM.

Frequency | 2.8 GHz | 3.0 GHz | 3.5 GHz | 4.0 GHz | 4.2 GHz
Voltage   | 0.775 V | 0.825 V | 0.95 V  | 1.20 V  | 1.375 V

I agree that the A13 could obviously attain higher frequencies with more voltage, but wouldn't its power curve be much steeper than the A72's, due to having significantly more cache?
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
I agree that the A13 could obviously attain higher frequencies with more voltage, but wouldn't its power curve be much steeper than the A72's, due to having significantly more cache?

The steepness of the voltage/frequency curve does not depend on cache size. Obviously the absolute clocks for larger caches or memory blocks are generally lower, but that does not change the steepness of the curve.
 

ksec

Senior member
Mar 5, 2010
420
117
116
Can Apple's A13 get real work done? If it can then why hasn't Apple offered the A13 in an actual HEDT Workstation? Why haven't they come out with a new Mac Pro using the Wonder SoC?

Also, the A13 can hit 4GHz, just because you say so? You don't know that. That's like saying Itanium could scale up to 4GHz+ just because other designs have.

To be fair, going from a TSMC 7nm ULP 5W chip at 2.6GHz to TSMC 7nm HP with a 50W TDP, I think 4GHz is not an invalid assumption. If you are talking about running the same A13 chip at 4GHz, then of course it wouldn't work.

The question of why Apple hasn't moved the A13 core to HEDT or the desktop is usually treated as technical, when in reality it is the wrong question. Apple today could buy TSMC and Intel with cash; just let that sink in for a minute.

There is nothing stopping Apple from developing a world-class CPU for the Mac based on the tech they have. But at what cost? Remember the Mac lineup has CPUs spanning from 60W to 250W with many different core counts. The amount of testing, QA, and binning for each individual SKU adds up. The only way this works is if Apple does something similar to AMD's chiplet strategy, with a single high-performance die for the entire Mac segment.

And then you have to add the software testing, which costs far more. Adobe Photoshop wouldn't even work on an AMD Hackintosh without patches, due to its use of Intel-specific optimizations.

But at this point, judging from Apple's (in)action, I think they are in full milking mentality, where they will just milk the Mac platform for as long as they can.

Not to mention Apple is still held hostage by Thunderbolt. If you are going to reply that TB is now royalty-free, that is true, but certification is still controlled by Intel. Which is like saying you can make MFi chips all you want, but you still have to pay Apple to get them certified.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The steepness of the voltage/frequency curve does not depend on cache size. Obviously the absolute clocks for larger caches or memory blocks are generally lower, but that does not change the steepness of the curve.

I figured that the large amount of L1 cache that the A13 has would consume a lot of power and generate a lot of heat if it were made to run at nearly twice the clock speed, especially if it maintains the 3ns latency.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
As said above. If it was so great, why is it not being used anywhere but phones? Because it's not as fast as the other solutions for what they do.

Look at how much difficulty AMD is having making inroads in the server market with Epyc even though they've got a clearly superior product right now in a lot of different categories. Why isn't that being used everywhere when it doesn't have the same problems that Apple would face due to introducing a completely different architecture? I think Apple has at least looked at the market and determined that it's going to be hard to break into it even if they can offer superior performance.

I don't know if there's any truth to it, but some years ago when a discussion about Apple using AMD processors came up, someone mentioned that it wasn't going to happen no matter how good AMD was because Apple had an exclusive deal with Intel that ran through 2021. Who knows to what extent Apple predicted the success of their own SoCs. If you told anyone that they'd have similar SPEC results to x86 ten years ago you'd get laughed out of the room assuming you could even keep a straight face yourself.

Both could be true. I suspect that Apple will make the move eventually, but they're going to want a smooth transition whenever they do decide to switch. They do have a history of these types of moves so I don't doubt they can succeed, but the world still runs on a lot of legacy software, and if it won't support the new architecture then Apple is just a non-starter as far as those customers are concerned.
 

naukkis

Senior member
Jun 5, 2002
702
571
136
Both could be true. I suspect that Apple will make the move eventually, but they're going to want a smooth transition whenever they do decide to switch. They do have a history of these types of moves so I don't doubt they can succeed, but the world still runs on a lot of legacy software, and if it won't support the new architecture then Apple is just a non-starter as far as those customers are concerned.

What people want is to be able to run the same binaries on desktop and phones. I had hoped the Windows platform could do it, but Intel never got x86 chips to work in phones. Apple can do it with their ARM chips. Too bad the Apple ecosystem sucks. I hope Microsoft soon realizes that they too have to extend their ecosystem to include phones, or the desktop will become Android...
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
I don't know if there's any truth to it, but some years ago when a discussion about Apple using AMD processors came up, someone mentioned that it wasn't going to happen no matter how good AMD was because Apple had an exclusive deal with Intel that ran through 2021. Who knows to what extent Apple predicted the success of their own SoCs. If you told anyone that they'd have similar SPEC results to x86 ten years ago you'd get laughed out of the room assuming you could even keep a straight face yourself.

There's some popularity surrounding the "Intel can't be beat because Intel" argument these days. In the long run, performance/TCO matters. Even a behemoth like Intel can (and will) be knocked off its pedestal. Otherwise, show me the cloud services still running Xeon Platinums, and some VC guys will wipe them out and steal their market share by offering Rome or Milan instances on a superior cost basis. IT shops are dynamic. They can't survive selling yesterday's hardware in a new package at insanely high power draw.

Apple CAN push Intel out of their own products, and they have the software development chops + brand loyalty to make it work. Moving outside of their current product space is a different matter, since the latter advantage mostly disappears, and the economic gains may not be that considerable even if they do make inroads against Intel. It might make more sense for them to spin off their CPU design team into their own corporation and sign a multi-year sweetheart deal for future ARM designs from that team.

I hope Microsoft soon realizes that they too have to extend their ecosystem to include phones, or the desktop will become Android...

What do you think was the reasoning behind Windows 8?
 

wintercharm

Junior Member
Apr 26, 2017
4
20
51
Apple's SoC designs are trapped inside a corporation that doesn't necessarily make its money by having the highest volume of shipments or by having the fastest hardware. Intel and AMD should fear their prowess, yes . . . but Apple is not a technology-first kind of company. I don't think they want to compete head-to-head with either company (per se).

On the flipside, I do see the possibility of (for example) A14-powered MacBooks being rational, especially if Intel can't offer them enough incentives to stay on x86. It may also make sense for Apple to work on their own cloud server architecture featuring heavily-modified versions of their SoCs. If their superior tech lets them push more software and services, then so be it.

Apple's strategy has always been to own the entire product stack. Vertically integrating means you don't have to worry about padding *someone else's* margins. That's why they're able to ship a vastly more powerful SoC in their smartphone, at a similar price to the other Android flagships out there (top end Samsung phones, for example).

It's however an issue if you must sell it also in $300 craptops. So core size/performance matters too.

Apple has never been interested in selling in the bargain-basement low-end market. But what they commonly do is repurpose the flagships from the last 2-3 years as "discounted" devices. Just like the iPhone ranges from $450 (iPhone 8) >> $1400 (11 Pro Max) and the iPad ranges from $330 (iPad) >> $1400 (iPad Pro), Apple could end up with a MacBook lineup that ranges from $800 ("old" ARM Mac) >> $2700 (top-end updated ARM Mac) if they moved to ARM.

By ditching Intel, Apple can pad their own margins while simultaneously dropping price to really squeeze out the Laptop market, especially those who don't start selling ARM laptops. Look at the cost of the 11" iPad Pro w/ 256GB SSD ($950) vs MacBook Air w/256 GB SSD ($1300). Similar "class" of low power chip, but one is significantly cheaper than the other. Apple could bring those margins "in house" rather than paying Intel, even if the price didn't change. Or it could put the squeeze on every other laptop maker, by offering a 13" ARM MacBook Air that costs $999, and has significantly better battery life than most other laptops on the market.

Oh, and about that battery thing:

* 11" iPad Pro - running a 120Hz VRR retina display gets 10 hours on a 30 Wh battery.
* 13" Intel Macbook Air - running 60Hz retina display gets 12 hours on a 50Wh battery.
* 12" ARM Macbook Air with 120Hz VRR retina display should get (45/30*10=) 15 hours of battery.
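That 15-hour figure is a straight linear scaling of the iPad Pro's runtime by battery capacity; the 45 Wh battery and iPad-Pro-like power draw for the hypothetical ARM Air are assumptions from the post, not product specs:

```python
# Linear battery-life extrapolation behind the estimate above.
ipad_pro_wh, ipad_pro_hours = 30, 10   # 11" iPad Pro: 30 Wh, 10 hours
arm_air_wh = 45                        # assumed battery of a 12" ARM Air

# Same average power draw => runtime scales with capacity.
est_hours = arm_air_wh / ipad_pro_wh * ipad_pro_hours
print(est_hours)  # 15.0
```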

Intel's constantly missed 10nm targets over the last few years, with delay after delay, have cost them the technical lead in the market (AMD is now ahead) and have also caused Apple tons of problems: from the delayed 2016 MacBook Pro launch, to overheating issues in a chassis designed in anticipation of 10nm chips being ready to go…

If Apple can afford to do a custom chip every 2 years for the iPad pro, they can certainly afford to do a custom chip every 2 years for the Mac - a higher priced product, with similar sales.
 

wintercharm

Junior Member
Apr 26, 2017
4
20
51
As said above. If it was so great, why is it not being used anywhere but phones? Because it's not as fast as the other solutions for what they do.

That's like saying "If jet engines were so great, why was everyone using propeller engines in World War 2?" The answer is quite simply cost and development time. These things do not happen overnight, and transitions take a while. Your complaints about ARM are the equivalent of standing up in 1945 and asking "if jets are so good, why aren't they everywhere?" Well, they got quite popular in the 60s and 70s, and now they're a mainstay of the aviation industry.

x86 took over the market at a time when computers were *heavily* memory constrained, and CISC was hugely advantageous for dealing with that. Since then, there has been tremendous inertia from so many tools and so much software being developed for x86/64. But we've now reached a point with silicon where RISC vs CISC is entirely a moot debate, because these chips have more cache onboard than supercomputers had *RAM* back in the day, and the two designs have converged in practice (with minor differences).

ARM also found a huge surge in popularity and development thanks to smartphones. It was smart about dumping the 32-bit ISA, which drastically simplified core designs and allowed it to adopt a very good 64-bit ISA (AArch64). It is now poised to push upwards into the laptop and desktop market, with a modern ISA that many developers are already familiar with and more coming onboard.

At the same time Intel, realizing its mistake with smartphones, tried to push low-power x86/64 into the smartphone market, but utterly failed to penetrate it. Since that moment, the writing has been on the wall for ARM to turn around and push into laptops, desktops, and even the HPC market.

You can also turn around the question and ask yourself this:

If ARM was going to suck, why did Microsoft spend time writing Windows on ARM and even building an emulator for legacy x86 stuff? Qualcomm would not have developed the 8cx and pushed it into a few laptops if they didn't think there would be a market. Samsung's upcoming Galaxy Book S, with 28 hours of battery life, wouldn't be launching in a month if ARM was considered "useless", and Amazon wouldn't be working on an ARM-based server chip for AWS if they didn't think there was a market for it. Nvidia wouldn't have announced the roadmap for their Xavier and Orin ARM cores that are pushing into new markets. Why has Adobe been quietly developing ARM versions of Photoshop, Illustrator, Publisher, and Lightroom? Why is Avid already working on ARM versions of Pro Tools?

Similarly, Apple has been working on faster and faster ARM chips for some time, and has been testing the waters with scaling them up (the A#X chips in iPads) and actively cooling them (Apple TV). If you're reading between the lines, the Mac Pro appears to be Apple's LAST professional x86/64 machine. By the time they are ready to update it again (5 years from now), they'll likely have an ARM HPC chip ready.

As for all this talk about "it can't do real work yet": when the 8cx laptops come out, and when Apple's ARM MacBooks come out, we will be able to compare ARM vs x86/64 performance on a bunch of real-world tasks, across two separate OSes (Windows ARM notebooks vs Windows x86 notebooks, and Apple ARM notebooks vs Apple x86 notebooks), and will have plenty of data to judge whether ARM is ready for "real work" or not...

I can imagine some people who haven't been paying attention, or who have been in denial about the SPEC FP and INT numbers, are going to be in for a heck of a shock when that happens...