Solved! ARM Apple High-End CPU - Intel replacement


Richie Rich

Senior member
Jul 28, 2019
470
229
76
There is a first rumor about an Intel replacement in Apple products:
  • ARM based high-end CPU
  • 8 cores, no SMT
  • IPC +30% over Cortex A77
  • desktop performance (Core i7/Ryzen R7) with much lower power consumption
  • introduction with the new-gen MacBook Air in mid-2020 (MacBook Pro and iMac also being considered)
  • massive AI accelerator

Source: Coreteks
 
Solution
What an understatement :D And it looks like it doesn't want to die. Yet.


Yes, the A13 is competitive against Intel chips, but the emulation tax is about 2x. So given that A13 ~= Intel, for emulated x86 programs you'd get half the speed of an equivalent x86 machine. This is one of the reasons they haven't switched yet.
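
To put that reasoning in numbers, here is a toy back-of-the-envelope (my own illustration, assuming the ~2x emulation tax stated above and treating single-threaded performance as one scalar):

Code:
# Toy calculation of the emulation-tax argument above (illustrative only).
native_arm = 1.0        # relative native performance of the ARM chip (~= current x86)
emulation_tax = 2.0     # quoted ~2x overhead for running x86 code under emulation
emulated_x86 = native_arm / emulation_tax
print(f"Emulated x86 apps run at ~{emulated_x86:.0%} of an equivalent x86 machine")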

Another reason is that it would prevent the use of Windows on their machines, something some say is very important.

The level of ignorance in this thread would be shocking if it weren't depressing.
Let's state some basics:

(a) History. Apple has never let backward compatibility limit what they do. They are not Intel, they are not Windows. They don't sell perpetual compatibility as a feature. Christ, the big...

the2199

Junior Member
Oct 17, 2019
13
4
81
Maynard, you do know you're a pain with your condescending tone, right? And it's funny that you didn't notice I was playing devil's advocate.


Hint: compare the performance of the chips when the switch happened. Then project the required performance of the Apple ARM chip. As I already wrote I personally don't care about legacy or Windows, so I'd be more than happy with the level of performance they have achieved.

I'm too lazy (and sane) to read the rest of your rant. It's probably disgustingly full of blind adoration for Apple which prevents any form of civilized discussion (as you repeatedly proved on Realworldtech).

I just want Apple to bring a MacOS X ARM machine with Xcode to market, and they technically already are in a position to do that.
I am amazed and surprised that an Apple fanboy answer was chosen as the right answer in a forum about CPUs and overclocking. Your average fanboy will get a headache just from trying to understand what an ISA is.





The use of "fanboy" is derogatory and it's not to be used.


esquared
Anandtech Forum Director
 

Nothingness

Platinum Member
Jul 3, 2013
2,422
754
136
I am amazed and surprised that an Apple fanboy answer was chosen as the right answer in a forum about CPUs and overclocking. Your average fanboy will get a headache just from trying to understand what an ISA is.
Maynard definitely is not the average fanboy. He knows and understands a lot about Apple (he worked there) and about micro-architecture, but he's so biased that any sensible discussion in which you say anything against Apple is impossible. And that's a pity.




The use of "fanboy" is derogatory and it's not to be used.


esquared
Anandtech Forum Director
 

the2199

Junior Member
Oct 17, 2019
13
4
81
Maynard definitely is not the average fanboy. He knows and understands a lot about Apple (he worked there) and about micro-architecture, but he's so biased that any sensible discussion in which you say anything against Apple is impossible. And that's a pity.
Thanks for the clarification.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
This is what I wrote :)

429.mcf is at 3.79W while 470.lbm is at 6.27W. So there's a 2.5W difference and you will agree that it's unlikely a core running a program at full speed consumes less than 0.5W. So 3W looks like a good approximation of the lower bound of max power consumption of a core.

It is close to impossible to derive core power consumption this way. You need, at the very least, to look at the performance counters to get an idea of where the power is spent. For example, if you are running at low IPC, core power is drastically reduced. Likewise, if you have a lot of memory accesses, system power is drastically increased, but that cannot be attributed to the cores. So without performance counters your argument is moot.
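
To make the counter argument concrete, here is a minimal sketch of a generic activity-based power model (my own illustration with made-up coefficients, not Thala's methodology or any real SoC's numbers):

Code:
# Toy activity-based power model: package power split into a static term plus
# terms driven by performance-counter-style inputs. Coefficients are invented.
def estimate_power(freq_ghz, ipc, mem_accesses_per_sec,
                   p_static=0.5, c_core=0.9, c_mem=2.0e-9):
    """Return (core_watts, uncore_watts) from a toy linear model."""
    core = c_core * ipc * freq_ghz                      # core switching scales with work done
    uncore = p_static + c_mem * mem_accesses_per_sec    # DRAM/interconnect activity
    return core, uncore

# A memory-bound, low-IPC workload: most of the extra power shows up as uncore.
core_w, uncore_w = estimate_power(freq_ghz=2.6, ipc=0.8, mem_accesses_per_sec=1.5e9)
print(f"core ~{core_w:.2f} W, uncore ~{uncore_w:.2f} W")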

Where did you get this A76 figure from? Again, looking at Andrei's results for the SD855, I see 458.sjeng at 1.68W and 470.lbm at 3.44W. That'd be about 2W.

From power simulations on Cortex A76 RTL. This way I can break down power usage by submodule. But I did not run 470.lbm either, so I'll give you that some compute kernels might use higher power at the core level.

Do you really think that all four big cores can ever run at full speed at the same time? And that this TDP will be sustained? Or that, like Intel, the max power consumption can exceed 7W for short periods of time?

On my Cortex A73 tablet I can run 8 cores at 100% without any frequency reduction. On NEON-heavy code there is sometimes a slight (10%) frequency reduction, but I have never seen more than that - and that's with 8 cores running.
I will surely test this for Cortex A76 once I get my Surface Pro X.
What I do know is that ARM devices typically do not increase voltage in single-core situations, where the power headroom is huge.


Look at this curve, and let's start talking about increasing voltage and TDP :)


Where is this coming from? It does not look anything like an N7 frequency-voltage curve.

Oh yes binning. How much does that bring to SD855 derivatives? 0.2 GHz? :D

A little more than that; I have seen deviations of up to 0.4GHz between nominal vs. slow/worst timing sign-offs in the 3GHz range.

I have this SD845 board in front of me. I'm looking at a way to put a fan on it because it keeps throttling. Locking the frequency makes it reach 100°C with a single core running. If I don't lock frequencies this thing throttles within a few seconds. Yes, that's a smartphone chip with a heatspreader.

Cannot comment on this without more info.
 

Steelbom

Senior member
Sep 1, 2009
438
17
81
I agree, but for different reasons. The reasons are market size and market segments. While the MacBook market might be big enough (not sure) for a custom chip, you still limit yourself to that one chip. So how do you differentiate between a MacBook Air and a MacBook Pro besides body/screen size? Very little incentive to get the bigger, more expensive product.

But the real issue then is the Mac Pro. That market is way, way too small to justify its own chip, but using the same chip as in MacBooks doesn't work either. Even with a chiplet strategy, the "IO die" would probably be too costly to make this worth it.

In summary, Apple's laptop/desktop market is far too small to allow them to make enough custom ARM chips to serve the same market as today. If Apple goes ARM, the Mac Pro and the high-performance Mac are dead. Of course this is entirely possible, but not very likely.



Apple does it by having "huge" cores and "huge" die sizes.

It's a fact that gets easily forgotten how big the Axx cores (bigger than Skylake) and the whole SoC actually are, and I have said this a lot on these forums already. This works fine if your newest SoC only goes into devices that cost >$700. Intel, on the other hand, still has far greater volume and needs to sell chips profitably in $300 craptops. Hence die size matters much, much more. We can see this with their 14nm supply issues right now. It's not because of demand but because competing with AMD forces them to ship >4-core dies, meaning much lower chip output in terms of units while needing more wafers. So the trade-off is not just IPC vs frequency. It also involves die size.
Actually that's a good point. Apple would need to make custom versions for the Mac variants and it's probably not worth working on that considering the number of them they sell.
 

Nothingness

Platinum Member
Jul 3, 2013
2,422
754
136
It is close to impossible to derive core power consumption this way. You need, at the very least, to look at the performance counters to get an idea of where the power is spent. For example, if you are running at low IPC, core power is drastically reduced. Likewise, if you have a lot of memory accesses, system power is drastically increased, but that cannot be attributed to the cores. So without performance counters your argument is moot.
You have a point here.

From power simulations on Cortex A76 RTL. This way I can break down power usage by submodule. But I did not run 470.lbm either, so I'll give you that some compute kernels might use higher power at the core level.
We both know how inaccurate RTL power estimations can be (even Spice simulations can be way off) and we both know you could not run a full real workload in these conditions. So it's not necessarily more accurate than the way I tried to guess power.

On my Cortex A73 tablet I can run 8 cores at 100% without any frequency reduction. On NEON-heavy code there is sometimes a slight (10%) frequency reduction, but I have never seen more than that - and that's with 8 cores running.
I will surely test this for Cortex A76 once I get my Surface Pro X.
No frequency reduction? How did you measure that? Did you compare single core vs multiple cores? What is the single-core frequency? I find it hard to believe, given that all the SoCs I have owned on an SBC require some non-negligible cooling.

What I do know is that ARM devices typically do not increase voltage in single-core situations, where the power headroom is huge.
I'm not convinced this tells a lot about what happens at top (boost) frequency.

Where is this coming from? It does not look anything like an N7 frequency-voltage curve.
From the same page I already linked.


A little more than that; I have seen deviations of up to 0.4GHz between nominal vs. slow/worst timing sign-offs in the 3GHz range.
Good to know, thanks!

Cannot comment on this without more info.
I'm running this:

Hard to say if it's due to a bad board design but that throttles a lot.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
You have a point here.


We both know how inaccurate RTL power estimations can be (even Spice simulations can be way off) and we both know you could not run a full real workload in these conditions. So it's not necessarily more accurate than the way I tried to guess power.

Indeed they can be, but in many cases the problem is in front of the computer :) You need to choose the right parameters (clock gating, wireload model, etc.), but if you are experienced the deviation is in the range of 20%. In this particular case we did not run gate-level simulation, but for other architectures, where we can also test against actual silicon, the RTL-level power simulation is quite OK (if you know what you are doing).


No frequency reduction? How did you measure that? Did you compare single core vs multiple cores? What is the single-core frequency? I find it hard to believe, given that all the SoCs I have owned on an SBC require some non-negligible cooling.

Nothing sophisticated, just looking at Windows Task Manager/Resource Monitor. This is also what I am using to check that all cores are running at 100% load. Single-core frequency is 2.25 GHz, and multi-core is the same. You get all cores to 100% when you choose 8 parallel threads in the benchmarking app. Currently for benchmarking I have only 7-Zip and POV-Ray, both compiled myself for ARM64; they both load all 8 cores though.
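
For anyone who wants to reproduce this kind of check without compiling their own benchmarks, a minimal all-core load generator could look like the sketch below (my own illustration, not the 7-Zip/POV-Ray runs described above); watch per-core frequency in Task Manager or any monitoring tool while it runs:

Code:
# Spin one busy worker per logical CPU so a frequency monitor can show
# whether clocks drop under full load. Pure Python, so it only exercises
# integer pipelines, unlike a real benchmark.
import multiprocessing as mp
import time

def burn(seconds):
    """Busy-loop doing integer math to keep one core pegged."""
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x = (x * 1664525 + 1013904223) & 0xFFFFFFFF  # cheap LCG step

if __name__ == "__main__":
    n = mp.cpu_count()
    workers = [mp.Process(target=burn, args=(60.0,)) for _ in range(n)]
    for w in workers:
        w.start()
    print(f"Loading {n} cores for 60 s...")
    for w in workers:
        w.join()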



Hmm, no clue where Andrei got this from. Maybe an Apple presentation? In any case, the appearance of frequency-voltage curves is roughly independent of architecture, if we ignore the scaling of the frequency axis, and depends only on process - so this one is quite a bit too steep at 1.0V.
This also means we can deduce such a curve for other architectures on N7, like Ryzen 2.


I'm running this:

Hard to say if it's due to a bad board design but that throttles a lot.

Ah, OK, thanks. Yup, a Cortex A75 @ 2.8GHz certainly draws quite a bit more power than my Cortex A73 @ 2.25GHz - both are 10nm after all.
 

Andrei.

Senior member
Jan 26, 2015
316
386
136
For the sake of clearing up the facts: the voltages are from the chip's voltage tables; I won't comment more on that. There's also the GPU and the small cores.

Also, this is the first time I have heard of f/V curves being common across a process and not taking the uarch into account. How the heck do you account for hitting new critical-path limits above certain frequencies?
 

name99

Senior member
Sep 11, 2010
404
303
136
I agree, but for different reasons. The reasons are market size and market segments. While the MacBook market might be big enough (not sure) for a custom chip, you still limit yourself to that one chip. So how do you differentiate between a MacBook Air and a MacBook Pro besides body/screen size? Very little incentive to get the bigger, more expensive product.

But the real issue then is the Mac Pro. That market is way, way too small to justify its own chip, but using the same chip as in MacBooks doesn't work either. Even with a chiplet strategy, the "IO die" would probably be too costly to make this worth it.

In summary, Apple's laptop/desktop market is far too small to allow them to make enough custom ARM chips to serve the same market as today. If Apple goes ARM, the Mac Pro and the high-performance Mac are dead. Of course this is entirely possible, but not very likely.



Apple does it by having "huge" cores and "huge" die sizes.

It's a fact that gets easily forgotten how big the Axx cores (bigger than Skylake) and the whole SoC actually are, and I have said this a lot on these forums already. This works fine if your newest SoC only goes into devices that cost >$700. Intel, on the other hand, still has far greater volume and needs to sell chips profitably in $300 craptops. Hence die size matters much, much more. We can see this with their 14nm supply issues right now. It's not because of demand but because competing with AMD forces them to ship >4-core dies, meaning much lower chip output in terms of units while needing more wafers. So the trade-off is not just IPC vs frequency. It also involves die size.

This claim (Apple cores larger than Intel) is problematic. I'm not interested in looking up the latest numbers YET AGAIN, but I have repeatedly debunked it.
The SoCs we have sizes for, e.g. the A11 or A12, are in the 8x mm^2 range; Apple tends to keep the phone SoCs at just under 100mm^2.
The closest Intel competitor is something like dual-core Skylake at 102mm^2.
Obviously Skylake has been succeeded by various other lakes --- but those (at least the actually shipping 14nm versions) are larger.
Then you get into arguing that Skylake has more IO drive circuitry. Sure, but Apple has more misc logic (NPU, security, motion coprocessor, a decent ISP, ...).
If you eyeball the cores, Intel's look larger than Apple's --- but then people start complaining about how much of the cache or NoC should be included.

Go look at the die shots if you don't believe me:
Apple:
https://www.anandtech.com/show/13393/techinsights-publishes-apple-a12-die-shot-our-take

You can choose A12 or A11 as you like (A11 on TSMC 10nm).

Skylake die shots and various numbers are here:
https://en.wikichip.org/wiki/intel/microarchitectures/skylake_(client)

Bottom line:
- if you want UNDERSTANDING, it's silly to say that Intel has smaller cores than Apple. Or smaller SoCs.
- Will Intel's future cores on 10nm be smaller? Who knows? Let's see when they ship in useful volumes. Of course soon after that, Apple will ship in 5nm.
- A much more intelligent question to ask is whether Apple uses more transistors than Intel, and if so, why their SoCs and cores are still the same size as Intel's. And once you understand both parts of the answer to that question, you might well start to ask exactly WTF was the motivation behind Intel pushing for uber-density with both 10nm and then 7nm...
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Also, this is the first time I have heard of f/V curves being common across a process and not taking the uarch into account. How the heck do you account for hitting new critical-path limits above certain frequencies?

I did say the appearance is roughly similar aside from the scaling of the frequency axis. This means that two different architectures may have different frequencies at a given voltage, but the relative scaling stays the same. Or, more strictly speaking, cycle time is roughly inversely proportional to (Vcc-Vth)/Vcc. The proportionality factor depends on the capacitance of the critical path of the particular architecture.
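
As a quick illustration of that claim (toy numbers of my own, not Thala's data): if f ∝ (Vcc - Vth)/Vcc, then two architectures with different proportionality factors produce the same relative frequency-voltage curve, because the factor cancels once you normalise:

Code:
# Two hypothetical "architectures" with different proportionality factors k
# share the same *relative* scaling under f = k * (Vcc - Vth) / Vcc.
VTH = 0.35  # assumed threshold voltage in volts (illustrative)

def freq(vcc, k, vth=VTH):
    return k * (vcc - vth) / vcc

for k in (3.0, 5.0):
    f_ref = freq(0.75, k)  # normalise at 0.75 V
    curve = [round(freq(v, k) / f_ref, 3) for v in (0.6, 0.75, 0.9, 1.05)]
    print(f"k={k}: relative f at 0.60/0.75/0.90/1.05 V -> {curve}")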
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
I agree, but for different reasons. The reasons are market size and market segments. While the MacBook market might be big enough (not sure) for a custom chip, you still limit yourself to that one chip. So how do you differentiate between a MacBook Air and a MacBook Pro besides body/screen size? Very little incentive to get the bigger, more expensive product.

But the real issue then is the Mac Pro. That market is way, way too small to justify its own chip, but using the same chip as in MacBooks doesn't work either. Even with a chiplet strategy, the "IO die" would probably be too costly to make this worth it.

In summary, Apple's laptop/desktop market is far too small to allow them to make enough custom ARM chips to serve the same market as today. If Apple goes ARM, the Mac Pro and the high-performance Mac are dead. Of course this is entirely possible, but not very likely.

For laptops, I fail to see why it's not possible to design one laptop chip and fuse off parts/adjust clocks to differentiate performance between different product lines. Mac Pros (and to a lesser extent iMacs) are a tougher nut to crack with this approach, but certainly not impossible; given the die sizes involved, it's almost certain that they'd have lots of chips not up to snuff for top clocks/full functionality.

When one Lightning core in an A13 SoC draws 4-5W to get similar single-threaded performance to a Ryzen 2 core in a 3900X drawing ~18W, with no process advantage, that's a heck of an incentive to ditch x86.
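
Taking those figures at face value (the 4-5W and ~18W numbers quoted above, not measurements of mine), the implied efficiency gap is roughly:

Code:
# Rough perf/W comparison using the figures quoted above (illustrative only).
a13_core_w = 4.5      # midpoint of the quoted 4-5 W per Lightning core
ryzen_core_w = 18.0   # quoted single-core draw for a 3900X
# Assuming roughly equal single-threaded performance, as stated above:
print(f"Implied perf/W advantage: ~{ryzen_core_w / a13_core_w:.1f}x")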
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
For laptops, I fail to see why it's not possible to design one laptop chip and fuse off parts/adjust clocks to differentiate performance between different product lines. Mac Pros (and to a lesser extent iMacs) are a tougher nut to crack with this approach, but certainly not impossible; given the die sizes involved, it's almost certain that they'd have lots of chips not up to snuff for top clocks/full functionality.

When one Lightning core in an A13 SoC draws 4-5W to get similar single-threaded performance to a Ryzen 2 core in a 3900X drawing ~18W, with no process advantage, that's a heck of an incentive to ditch x86.
There is a way to scale performance with just one chip:
  • MacBook Air .... low power / low clocks
  • MacBook Pro ... higher clocks
  • iMac ................. 2x CPU + 2x memory channels (not perfect due to NUMA obstacles, although easy)
  • MacPro ............ 4x CPU (AMD EPYC1/Naples style), 4x memory channels

4x8 high-performance cores is 32 cores in total; that sounds pretty reasonable for an iMac Pro as a replacement for the 18-core Xeon.
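
As a toy restatement of that ladder (my own sketch of the list above, not an actual Apple roadmap; the single memory channel for the MacBooks is my assumption):

Code:
# Hypothetical scaling ladder built from one 8-core die: multiply dies and
# memory channels per tier, mirroring the list above.
CORES_PER_DIE = 8
TIERS = {
    "MacBook Air": {"dies": 1, "mem_channels": 1},   # low power / low clocks
    "MacBook Pro": {"dies": 1, "mem_channels": 1},   # higher clocks
    "iMac":        {"dies": 2, "mem_channels": 2},
    "Mac Pro":     {"dies": 4, "mem_channels": 4},   # EPYC1/Naples-style MCM
}
for name, cfg in TIERS.items():
    print(f"{name:12s}: {cfg['dies'] * CORES_PER_DIE:2d} cores, "
          f"{cfg['mem_channels']}x memory channels")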
 

name99

Senior member
Sep 11, 2010
404
303
136
There is a way to scale performance with just one chip:
  • MacBook Air .... low power / low clocks
  • MacBook Pro ... higher clocks
  • iMac ................. 2x CPU + 2x memory channels (not perfect due to NUMA obstacles, although easy)
  • MacPro ............ 4x CPU (AMD EPYC1/Naples style), 4x memory channels

4x8 high-performance cores is 32 cores in total; that sounds pretty reasonable for an iMac Pro as a replacement for the 18-core Xeon.

Exactly how Apple will handle scaling from iPhones to Mac Pros remains THE fascinating question, as opposed to the sillier analyses you see on the web.
The choice of chiplets rather than multiple bespoke die sizes (or a single large die that's fused off) seems more or less obvious. But even once you accept that, there remain a variety of choices
- how much of IO and memory control do you put in a separate hub(s) vs on the main SoC. Separate hub means you can use a cheaper process. But memory controller on the SoC means you get memory bandwidth scaling with CPU count in a nice way.

- do you use as your baseline chiplet something like an A14X? This means only one die, but it also means a fair fraction (15-25%?) of the die is things like security, ISP, and media encode/decode that don't need to have multiple copies on iMacs and Mac Pros.
Or do you have a third, Z SoC that's a stripped-down X SoC? Remove all that one-off stuff, and add chiplet communication channels.

- what to do about the GPU? If you do the math, the A12X GPU, as far as GeekBench 5 Compute results are concerned, is at about 1/6th of the top results for an iMac Pro. So assume a 50% boost for the A13X GPU (the iPhone GPU saw a 50% boost), and assume the (possibly very dodgy...) hypothesis that GB5 Compute is a good representation of all a GPU needs to do; that means you need 4 A13X chiplets to match the iMac Pro. It's within the bounds of plausibility, but it's not ideal -- sync between the different GPUs will be much more expensive than on a monolithic GPU.
The second issue is bandwidth. Apple's System cache works extremely well, as does tiling, but you still want bandwidth for some desktop GPU tasks...

So four alternatives present themselves
+ give up on GPU, at least this time round. Maybe have one iPad-class GPU somewhere (in the IO hub?) for low-power work, plus use a standard nV or AMD external GPU on PCIe.
+ design an Apple GPU based on what Apple already has, but scaled up much larger, taking over an entire die. Then package that with HBM, and connect it either via PCIe or via some Apple internal bus to the rest of the system.
+ use a GPU that's distributed across the chiplets, one piece on each of the 1, 2, 4 chiplets used in different models. Plus HBM somewhere on the same interposer.
+ finally like the above, but no HBM and just rely on LP-DDR5 (run fast and wide)
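
Restating the GPU arithmetic from above as a toy calculation (the 1/6 ratio and 50% uplift are the figures quoted in this post; everything else is assumed):

Code:
# Toy version of the GPU chiplet arithmetic above (quoted figures, not measurements).
a12x_vs_imacpro = 1 / 6      # A12X GPU ~1/6 of a top iMac Pro GB5 Compute score
gen_uplift = 1.5             # assumed 50% generational GPU boost for an "A13X"
a13x_vs_imacpro = a12x_vs_imacpro * gen_uplift        # ~1/4
chiplets_needed = round(1 / a13x_vs_imacpro)
print(f"~{a13x_vs_imacpro:.2f} of an iMac Pro per chiplet -> "
      f"about {chiplets_needed} chiplets to match it")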

You can then go through the same GPU analysis wrt NPU...

So what WILL Apple choose? I don't think we can usefully go beyond listing possibilities.
The exact choice will depend on both performance factors and cost factors -- and we outsiders don't have a clue as to either, certainly not enough even to make a reasonable guess.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
- what to do about the GPU? If you do the math, the A12X GPU, as far as GeekBench 5 Compute results are concerned, is at about 1/6th of the top results for an iMac Pro. So assume a 50% boost for the A13X GPU (the iPhone GPU saw a 50% boost), and assume the (possibly very dodgy...) hypothesis that GB5 Compute is a good representation of all a GPU needs to do; that means you need 4 A13X chiplets to match the iMac Pro. It's within the bounds of plausibility, but it's not ideal -- sync between the different GPUs will be much more expensive than on a monolithic GPU.

The GPU is a separate discussion. They could just as well use a standard PCIe discrete GPU from NVIDIA or AMD for the iMac Pro for the time being.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,675
3,801
136
So you haven't noticed that the Apple A13, in a freakin' phone, is as powerful as the fastest desktop x86 CPUs. That's the point of this whole discussion: Apple could emulate x86 with their desktop ARM CPU and still have almost comparable performance to the fastest x86 CPUs, and with native ARM code easily have the fastest desktop machine. It's so much better than x86 CPUs that the question actually is why Apple hasn't made the switch yet.

So why hasn't it been done? Why is nobody thinking about doing it? I am so tired of armchair engineers that seemingly know better than the real engineers that produce these products.

My thing is, if x86 sucks as much as you guys say, then why is it so dominant? x86 has gone up against several competing ISAs over the years, and come out on top from both a price and performance perspective.

There must be more to it than just "x86 sucks."

I wish I could like this a thousand times. "x86 sucks y'all". "OK, then why are we all using it?".
 

soresu

Platinum Member
Dec 19, 2014
2,664
1,863
136
The GPU is a separate discussion. They could just as well use a standard PCIe discrete GPU from NVIDIA or AMD for the iMac Pro for the time being.
Strangely, they seem to have tied themselves to AMD, despite the lack of CUDA.

That insane 56 TFLOPS quad Vega II GPU based Mac Pro is just sitting idle with most current GPU path tracers at the moment. Sadly they deprecated OpenCL too, so only Metal or Vulkan/MoltenVK-based renderers would work (and there aren't many, apart from an experimental Octane version).
 

soresu

Platinum Member
Dec 19, 2014
2,664
1,863
136
I wish I could like this a thousand times. "x86 sucks y'all". "OK, then why are we all using it?".
In a few words, the Wintel monopoly that began in the 90's (or earlier?).

I'm not saying that the current x86 uarchs aren't good (they definitely are), but the Wintel force is not something to be ignored as a reason for x86's domination of the marketplace.
 

soresu

Platinum Member
Dec 19, 2014
2,664
1,863
136
- Will Intel's future cores on 10nm be smaller? Who knows? Let's see when they ship in useful volumes. Of course soon after that, Apple will ship in 5nm.
5nm EUV FinFET will be a significant improvement over Intel's 10nm in area, I believe, and likely to a lesser degree in power efficiency.

However, I believe it will take 3nm/MBCFET to give us a true 'full node' successor to 7nm, at least one where power and area scaling both benefit equally from the change (in a manner similar to a 16/14nm-to-7nm shrink).

I think 3nm could well be competitive with Intel's '7nm'.
 

soresu

Platinum Member
Dec 19, 2014
2,664
1,863
136
Because Windows. Apple obviously doesn't care about Windows.
I'm not even 100% sure that they care that much about Macs at this point at all.

That $999 monitor stand thing was practically Apple trolling their own userbase/customers - when you get that out of touch you can't possibly be 100% on the ball.
 

scannall

Golden Member
Jan 1, 2012
1,946
1,638
136
In a few words, the Wintel monopoly that began in the 90's (or earlier?).

I'm not saying that the current x86 uarchs aren't good (they definitely are), but the Wintel force is not something to be ignored as a reason for x86's domination of the marketplace.
Earlier, by a lot (in computer years). IBM saw these puny upstarts making a wild west of the computer business (and making money), and we can't have that. Sooo, they decided to enter the PC business. You have to realize that in 1981 the whole idea of a computer in every home, let alone every pocket, was just crazy. The ISA or any of that wasn't important to IBM, and they didn't want to spend the internal resources on what might be a fad.

So, shopping around for sources, they were originally going to go with CP/M for an OS, until Gary Kildall failed to show up. Going for Plan B, they chose Microsoft. At the time it was a garage company, but they had done some headliner work for a company called Apple (worth not quite a billion at the time)... See where this is going yet? Like I said, it was the wild west, and you could actually call a CEO if you had something interesting to say.

For CPUs, the ONLY company that was willing to second-source was Intel. Motorola had a much better product (and ISA) at the time, but refused to license for second sourcing.

Soooo, almost 40 years later, IBM is long gone from the PC biz, but we are still stuck with their anointed.

It was a crazy time back then, and exciting. I won't say that standardization is bad, but it certainly crimps progress.
 

soresu

Platinum Member
Dec 19, 2014
2,664
1,863
136
Earlier, by a lot (in computer years). IBM saw these puny upstarts making a wild west of the computer business (and making money), and we can't have that. Sooo, they decided to enter the PC business. You have to realize that in 1981 the whole idea of a computer in every home, let alone every pocket, was just crazy. The ISA or any of that wasn't important to IBM, and they didn't want to spend the internal resources on what might be a fad.

So, shopping around for sources, they were originally going to go with CP/M for an OS, until Gary Kildall failed to show up. Going for Plan B, they chose Microsoft. At the time it was a garage company, but they had done some headliner work for a company called Apple (worth not quite a billion at the time)... See where this is going yet? Like I said, it was the wild west, and you could actually call a CEO if you had something interesting to say.

For CPUs, the ONLY company that was willing to second-source was Intel. Motorola had a much better product (and ISA) at the time, but refused to license for second sourcing.

Soooo, almost 40 years later, IBM is long gone from the PC biz, but we are still stuck with their anointed.

It was a crazy time back then, and exciting. I won't say that standardization is bad, but it certainly crimps progress.
The first PC in my house was an IBM compatible with a CD drive; we had a Jurassic Park game and Rebel Assault.

Aaahhh those were the days.....
 

Nothingness

Platinum Member
Jul 3, 2013
2,422
754
136
I wish I could like this a thousand times. "x86 sucks y'all". "OK, then why are we all using it?".
Easy: because Intel and AMD used to make the best CPUs at an affordable price. And legacy of course, as others wrote.

Like it or not, x86 as an instruction set sucks; did you ever do assembly language programming on it? And on another assembly language to compare? It's just an abomination; it has gotten better with 32-bit and x86-64, but its 8080 roots still show.

But Intel's and AMD's implementations of x86 are good, and this is what matters to the end user (plus legacy, obviously).
 

Nothingness

Platinum Member
Jul 3, 2013
2,422
754
136
For CPUs, the ONLY company that was willing to second-source was Intel. Motorola had a much better product (and ISA) at the time, but refused to license for second sourcing.
Ha, the 68k, that was a nice ISA. It had its shortcomings, but it was so much better than that brain-dead x86. Some Motorola people must regret their decision, as much as Otellini regrets turning down Jobs for the iPhone CPU.