
Will x86 be here "forever"?

x86 should've died in the 90's, and RISC processors (such as PowerPC) should've taken over. But obviously that didn't happen.

It didn't seem to be all that great compared to x86 on any measurable level, at least with Macs using G3/G4/G5 vs. PCs. Hell, I tried to use a PowerMac G5 recently, with a Dual 2Ghz setup and 4GB of ram, and it ran like total ass on modern websites. An old 1.8Ghz Opteron I have runs better than it did, with 2GB of ram.

http://forums.anandtech.com/showthread.php?t=2149304

We've had that discussion before. But the short answer is that the good elements of RISC are already in x86 now, for the most part. Combine that with efficient coding and good compiling tools and work, and you can get fantastic performance even with modest resources. However, many dev houses are quite sloppy with things due to budget/time constraints, and so you get bloated, buggy crap that runs like poop while eating tons of ram and not even remotely touching what is reasonably possible with x86.
 
I thought iOS and Android had demonstrated the vast majority of people don't care for x86 compatibility. Now all they need are devices compatible with the apps they bought on a store where x86 doesn't matter.

I tolerate Android simply because it's either that or iOS in extremely low-power devices such as smartphones. I suspect the day is drawing close for x86 in phones, however. Show me an Intel-powered phone running a full-fledged Windows and I'll leave Android behind for good. I don't own a tablet myself, though both of my children have Nabis. I want a Surface Pro once the price falls a bit; I can't see paying $1000 for what would really be just a toy, though I expect I would use it quite frequently. I simply want my PC in an extremely portable form, and Android/iOS on ARM aren't that.
 
I simply want my pc in an extremely portable form

Why? Desktop-mode apps aren't usable on a smartphone/tablet. So much so that I wouldn't be surprised if MS even removes desktop mode from a future version of Windows for tablets, unless "convertible" sales take off (don't hold your breath).
 
Intel's process advantage is not going to be enough to prop up x86. Heck even with that advantage, ARM currently has 95%+ of the smartphone market. A sobering thought.

The main problem for ARM is that nobody needs ARM just for being ARM. MIPS is already looking to take over parts of its market, and x86 other parts.
 
HPC? you mean low power servers right

the primary reason they are jumping into bed with ARM is that they can't afford R&D on the high end
 
It didn't seem to be all that great compared to x86 on any measurable level, at least with Macs using G3/G4/G5 vs. PCs. Hell, I tried to use a PowerMac G5 recently, with a Dual 2Ghz setup and 4GB of ram, and it ran like total ass on modern websites. An old 1.8Ghz Opteron I have runs better than it did, with 2GB of ram.

What a shockingly anecdotal measurement...
 
The reason they don't care about this compatibility on smartphones and tablets is that most of them already have a PC they can fall back on for applications that need legacy compatibility. Very few people are getting all their work done on ARM devices. Most real-world businesses have at least some legacy applications that absolutely must run - whether they are standard x86 binaries, Office VBA automations (which don't run on RT), or weird old VBScripts or something else. This isn't going to change in the foreseeable future. x86 isn't going anywhere; it will still be around 5, 10, probably 100 years from now.

Exactly. All the old shit floating around at the place I work. The whole company (several thousand employees) was forced onto 32-bit Win 7 because a couple of people still need to use 16-bit apps. There is so much legacy stuff around that it will last decades till it can all be replaced.

And let's not forget that smartphones and tablets are popular and use ARM, but all they actually do is make use of services, of which probably >95% run on x86 hardware.
 
What a shockingly anecdotal measurement...

I know, how dare someone share personal experience 😀

It's probably not all the G5's fault, though I do remember it being incredibly overrated. IIRC AMD benched their AMD64 CPUs against the G5, and they won handily. That said, Apple themselves and the software community have basically abandoned the G3/G4/G5-based Macs. Even trying to browse the web is a PITA on a Power-based Apple product, simply because the software is hopelessly out of date.

Ah yes, here we go:

http://www.techhive.com/article/112749/article.html?page=8

http://www.ppcnux.com/?q=node/6552

I also find it funny that some of my Mac fan friends constantly slammed/insulted x86, then crapped their pants when they made the switch, and now pretend like x86 was always the best.
 
I also find it funny that some of my Mac fan friends constantly slammed/insulted x86, then crapped their pants when they made the switch, and now pretend like x86 was always the best.

To be fair, a lot of that sentiment goes back to the 286/386 days, when there were 64KB memory segments, 640KB DOS limits and stuff like that. Some of it was IBM's fault, some Microsoft's, but a lot of the blame for that mess was Intel's. Although some of it might just be Mac fanboyism of course...
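Those 64KB segments fell straight out of the 8086's segment:offset addressing. A tiny sketch of how the translation worked (the helper name is made up for illustration, not from any library):

```python
# Real-mode 8086 address translation: linear = segment * 16 + offset.
# The 16-bit offset is why one segment can only span 64KB at a time.

def linear_address(segment: int, offset: int) -> int:
    """Translate a real-mode segment:offset pair to a 20-bit linear address."""
    return ((segment << 4) + offset) & 0xFFFFF  # the 8086 had a 20-bit address bus

# One segment register reaches at most 64KB:
assert linear_address(0x1000, 0x0000) == 0x10000
assert linear_address(0x1000, 0xFFFF) == 0x1FFFF

# Segments overlap every 16 bytes, so many pairs alias the same byte:
assert linear_address(0x1234, 0x0010) == linear_address(0x1235, 0x0000)

# Conventional DOS memory ended at 640KB (0xA0000), where video RAM started:
print(hex(640 * 1024))  # -> 0xa0000
```

The 640KB limit wasn't in the CPU itself; the 8086 could address a full 1MB, but the IBM PC reserved everything above 0xA0000 for video and ROM.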

My take on x86 is that the performance is there now in spite of x86. By which I mean that because of all the resources which Intel (& AMD) threw at improving x86 over the years, x86 has become the performance leader. But if you were to go back to the mid 80s, look at what was available then (80286, 68020, MIPS, ARM, SPARC etc.) and predict which one would win the performance war, x86 would probably have been an outsider. Sure, if you think in terms of 'money = resources to win the race', the fact that Intel had won the IBM PC contract might have been the most relevant factor.

Point being: if the same resources had been thrown at any of the alternatives to Intel (or even some Intel RISC design like their DSPs), I am sure that not only would those alternatives hold the performance crown now, but their performance might well be higher, since x86 was a poor place to start compared to the others, with their flat memory models, plentiful registers, etc.
 
If you define "x86" as "capable of running Windows 7", no, and I think it could happen as early as after Skymont. Once AMD implodes under its debt, Intel could very easily continue to sell Skymont to the market indefinitely as long as people need PC replacements, while focusing their efforts on this new x86-sorta architecture, focused solely on perf/watt and nothing else.

What?

First off- no, that's not how you define "x86". x86 is the instruction set derived from the 8086 processor, which is used by Core, Atom, Silvermont Atom, Xeon Phi...

Second- Skymont isn't "x86-sorta". It's x86.
 
Nobody knows if x86 will remain a niche or not, but the world is already changing to a new arch. Users are massively moving away from x86 to ARM and the trend will continue next year.



AMD will be substituting x86 servers (Jaguar arch.) with ARM servers this year.

ARM is also starting to substitute for x86 in HPC. The plans to build the most powerful supercomputer in the world use ARM instead of x86.

Only because Intel were slow to understand the importance of the ultra mobile market. As you can see with Haswell they are starting to move into that market and in a few years they will have a big slice of the tablet market imo.
 
First off- no, that's not how you define "x86". x86 is the instruction set derived from the 8086 processor, which is used by Core, Atom, Silvermont Atom, Xeon Phi...

Basically the idea is that Intel would take x86 and change things, remove instructions, etc, to the point where it wouldn't be able to run Windows 7. They could give it a new name, but they could still call it x86 given the lineage.
 
Basically the idea is that Intel would take x86 and change things, remove instructions, etc, to the point where it wouldn't be able to run Windows 7. They could give it a new name, but they could still call it x86 given the lineage.

Hmm. Potentially- this already happened with the Phi (no SSE1-4, no MMX, no AVX). I don't really see them shooting themselves in the foot like that though, certainly not in the mainstream cores. And certainly not in Skymont- that's just a shrink of Skylake.
 
Basically the idea is that Intel would take x86 and change things, remove instructions, etc, to the point where it wouldn't be able to run Windows 7. They could give it a new name, but they could still call it x86 given the lineage.
As NTMBK noted, Xeon Phi lacks many features, including 32-bit support and cmov, on top of MMX and SSE. Intel calling it x86 is plain stupid. It's as if MS called Windows a system that can't run legacy apps. Hmm, wait 😛
 
Hmm. Potentially- this already happened with the Phi (no SSE1-4, no MMX, no AVX). I don't really see them shooting themselves in the foot like that though, certainly not in the mainstream cores. And certainly not in Skymont- that's just a shrink of Skylake.

Basically this sorta-new architecture would be completely designed around perf/watt, so removing the cruft is going to be essential. And it's "After Skymont".
 
Basically this sorta-new architecture would be completely designed around perf/watt, so removing the cruft is going to be essential. And it's "After Skymont".

I doubt including support for the older instruction sets really adds much power usage. They're going to have big fat vector units for AVX3 anyway, so adding support for using 1/2, 1/4 or 1/8 of the vector to support MMX, SSE or AVX2 seems like a no brainer.

Besides, the mainstream line of cores are also used in the server parts- and ditching support for legacy software in hardware aimed at businesses is pretty suicidal.

I could see a future Atom ditching some older instructions, potentially, like the MMX extensions. (AMD already ditched their 3dNow! extension.) But the mainstream cores? Nah.
 
I doubt including support for the older instruction sets really adds much power usage. They're going to have big fat vector units for AVX3 anyway, so adding support for using 1/2, 1/4 or 1/8 of the vector to support MMX, SSE or AVX2 seems like a no brainer.

Besides, the mainstream line of cores are also used in the server parts- and ditching support for legacy software in hardware aimed at businesses is pretty suicidal.

I could see a future Atom ditching some older instructions, potentially, like the MMX extensions. (AMD already ditched their 3dNow! extension.) But the mainstream cores? Nah.

I doubt they'll ditch anything, because you quickly run into problems, and the savings are close to nonexistent.

It was easy to remove 3DNow! because it was not universal. MMX, for example, is universal.
 
HPC? you mean low power servers right

Is this addressed to me? If yes, my post was:

ARM is also starting to substitute for x86 in HPC. The plans to build the most powerful supercomputer in the world use ARM instead of x86.

HPC means High Performance Computing. The most powerful supercomputer in the world will not use x86. The goal is to build a supercomputer 10x more powerful than the Titan supercomputer.

http://www.montblanc-project.eu/press-corner/news/new-eu-based-supercomputer-be-arm-based

A prototype will be ready this month.

Only because Intel were slow to understand the importance of the ultra mobile market. As you can see with Haswell they are starting to move into that market and in a few years they will have a big slice of the tablet market imo.

Intel lacks the technology and resources to compete with the ARM ecosystem. For years Intel has predicted a renaissance of the traditional PC, but nobody listened and the market continues to move away from x86.

Now Intel feels obligated to compete in the ultra mobile market but it cannot.

Check the thread about AnTuTu. Intel tried to spread the news that its future mobile chips beat ARM, but the benchmarks were rigged. Recent benchmarks show that future Intel chips are very far behind current ARM designs.
 
To be fair, a lot of that sentiment goes back to the 286/386 days, when there were 64KB memory segments, 640KB DOS limits and stuff like that. Some of it was IBM's fault, some Microsoft's, but a lot of the blame for that mess was Intel's. Although some of it might just be Mac fanboyism of course...

My take on x86 is that the performance is there now in spite of x86. By which I mean that because of all the resources which Intel (& AMD) threw at improving x86 over the years, x86 has become the performance leader. But if you were to go back to the mid 80s, look at what was available then (80286, 68020, MIPS, ARM, SPARC etc.) and predict which one would win the performance war, x86 would probably have been an outsider. Sure, if you think in terms of 'money = resources to win the race', the fact that Intel had won the IBM PC contract might have been the most relevant factor.

Point being: if the same resources had been thrown at any of the alternatives to Intel (or even some Intel RISC design like their DSPs), I am sure that not only would those alternatives have the performance crown now but their performance might be higher since x86 was a poor place to start compared to the others with their flat memory model, lots of registers etc.

X86 was weak up until the Pentium Pro's release in what, 1996? So yes, x86 sucked in the 80s and early 90s.
X86 was designed for a world where the cost of memory far outstripped everything else. It is still the most memory-efficient design on the common market, even though memory cost isn't a concern anymore.
Most RISC processors were designed for a world where die size was expensive. This is also no longer true, but ARM processors can generally do more with less die area and thus go into the lowest-cost devices.
ARM co-opted the low-power market, and they have some innate advantages there. X86 has some innate advantages in the high-performance market.
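The memory-efficiency point is about code density. A back-of-the-envelope illustration, with instruction byte counts hand-counted for 32-bit x86 (illustrative, not a benchmark):

```python
# x86's variable-length encoding vs. a classic fixed 4-byte RISC encoding.

x86_bytes = {
    "push ebx":       1,  # 53
    "mov eax, [ebx]": 2,  # 8B 03
    "add eax, ecx":   2,  # 03 C1
}

# A classic 32-bit RISC (e.g. original ARM) encodes every instruction in 4 bytes.
risc_total = 4 * len(x86_bytes)
x86_total = sum(x86_bytes.values())

print(f"x86: {x86_total} bytes, fixed-width RISC: {risc_total} bytes")
# -> x86: 5 bytes, fixed-width RISC: 12 bytes
```

Tighter packing means more instructions per cache line and per fetch, which is a big part of why x86 stayed memory-efficient. (RISC vendors later added compressed encodings like Thumb for the same reason.)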

X86 needed more research/money to gain some of the performance features innate to RISC designs, and because of its inefficiency in die space, it took longer for important hardware components to be integrated on-die.
That said, despite omitting out-of-order execution, the Intel Atom still performs up there with the best RISC designs that also lack OoOE, and competes with low-end OoOE designs (i.e. ARM and AMD's Brazos).

Most of the burden of writing x86 code was absorbed into compilers. Thankfully, decades of research and a decoupling of the logical representation from the machine code representation has made x86 compilers quite good.
 
x86 will never die.

it will be emulated on quantum computers when it gets replaced.

Think of PS1-> PS2.... quantum PC's should be backwards compatible.

And everyone should be on one in the next 20 yrs. (<--- a guess in the industry)
 
Point being: if the same resources had been thrown at any of the alternatives to Intel (or even some Intel RISC design like their DSPs), I am sure that not only would those alternatives have the performance crown now but their performance might be higher since x86 was a poor place to start compared to the others with their flat memory model, lots of registers etc.

I disagree. Look what AMD was able to accomplish with a fraction of Intel's and IBM's budget back then. AMD's K8 was still faster than the PowerPC 970.
 
I disagree. Look what AMD was able to accomplish with a fraction of Intel's and IBM's budget back then. AMD's K8 was still faster than the PowerPC 970.

Well, my point was that x86's baggage makes things harder. If the K8 team had been able to continue the Alpha project (most of them were ex-Alpha) who knows what they might have achieved. Actually, the problem with historical musings is that they are all 'what ifs'...

EDIT: make that the K7 team; just checked the wiki
 
Intel lacks the technology and resources to compete with the ARM ecosystem. For years Intel has predicted a renaissance of the traditional PC, but nobody listened and the market continues to move away from x86.

Now Intel feels obligated to compete in the ultra mobile market but it cannot.

Check the thread about AnTuTu. Intel tried to spread the news that its future mobile chips beat ARM, but the benchmarks were rigged. Recent benchmarks show that future Intel chips are very far behind current ARM designs.

LOL

I really can't believe that you type stuff like that without intentionally trolling.

Intel is a giant ship that takes a while to change directions. However, technology and resources are exactly what Intel has almost infinite supplies of. Their annual R&D budget alone is far larger than the entire operating revenues of most of their competitors. Taking it further, once you combine the staggering supremacy of Intel's fabs, it is not a question of 'if' Intel overtakes ARM, but 'when'. The ship is fully turned in that direction now, and every successive product release that Intel aims at that target will get vastly better. Once we're talking about 10nm chips (already on the maps!), combined with vastly improved designs, the game is pretty much over. At the rate we're going, TSMC/GF will be struggling with 22nm or maybe 14nm if they're really lucky. And then you get the issues of having no in-house fabs for the engineers to work with. Successive 'spins' take far longer with outsourced fabs.

Of course you are one to say continually trollish things like : 9590 is better than 4770K! Haswell sucks! PS4 is better than 100 Titans! Etc.

If nothing else, I guess you supply some comedy to the forums.
 
Point being: if the same resources had been thrown at any of the alternatives to Intel (or even some Intel RISC design like their DSPs), I am sure that not only would those alternatives have the performance crown now but their performance might be higher since x86 was a poor place to start compared to the others with their flat memory model, lots of registers etc.

The x86 decoder is a very small part of most modern CPUs, and becomes less and less of a significant overhead as time goes by. Look at any CPU die shot and you'll see that other parts of the chip (especially on-die cache) take up far more space.

Furthermore, the complex x86 instruction set is not without its advantages; since each opcode does more than a RISC instruction, the code is more tightly packed and the decoder's bandwidth is maximized. Having an x86 front-end feed an underlying RISC-like core is what most modern x86 chips do. You get the best aspects of both.
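That front-end/back-end split can be sketched as a toy decoder. The micro-op names and tuple format here are made up purely to illustrate the cracking step:

```python
# Toy sketch of an x86-style front end: a complex instruction with a
# memory operand is cracked into simpler load + ALU micro-ops, which is
# roughly what modern x86 decoders feed to the RISC-like core behind them.

def decode(insn):
    """Crack one (op, dst, src) instruction into a list of micro-ops."""
    op, dst, src = insn
    if src.startswith("["):  # memory operand -> needs a separate load first
        return [("load", "tmp", src), (op, dst, "tmp")]
    return [(op, dst, src)]  # register-register ops pass through as one uop

print(decode(("add", "eax", "[ebx]")))
# -> [('load', 'tmp', '[ebx]'), ('add', 'eax', 'tmp')]
print(decode(("add", "eax", "ecx")))
# -> [('add', 'eax', 'ecx')]
```

One dense x86 instruction in the instruction cache, two simple operations in the execution core: that is the "best of both" trade the post describes.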

As to the low register count, that issue is almost completely solved by 64-bit mode, which has 16 general-purpose registers plus 16 SSE registers. That compares fairly well with most RISC designs. (Increasing from 16 to 32 registers would probably offer only a marginal improvement.)
 