What are the limitations of the x86 ISA anyhow?

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
The x86 ISA is always said to be outdated, hard to use, and a limit on performance. Yet PC cpus are considered easier to program for than console/handheld (Pocket PC, phone) cpus.
For that matter, the x86 ISA has been updated frequently with extensions like SSE.
And as for performance... current x86 cpus generally beat all competitors. I believe Apple's G5 cpus are blown away in integer performance, sometimes blown away in floating point, and about on par with an Opteron in vector performance.
Is the cost higher? I'm not sure how the transistor budget of modern x86 cpus compares to competitors'; maybe x86 cpus only perform better because they feature 3x the transistors?
And just how bad is the ISA? Supposedly it's not perfect, but the main limitation I've heard of is variable data sizes (from 8-bit to 32-bit, wasn't it?), though that also helps reduce the amount of cache and memory needed. For that matter, the PPC architecture supposedly takes a nasty performance hit when switching between integer and floating point, though I could imagine that's related to preventing viruses and other exploits.
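For reference, here's a rough sketch of what those variable sizes buy you. The byte counts in the comments are typical 32-bit x86 encodings quoted from memory (so treat them as approximate); a fixed-length RISC like PPC spends 4 bytes on every instruction regardless:

    #include <stdint.h>

    /* Sketch: x86's variable-length encoding lets the same logical
       operation take anywhere from 1 to 5+ bytes depending on operand
       size, which is part of why x86 code tends to be compact. */
    uint32_t density_demo(uint32_t x, uint8_t y)
    {
        x = x + 1;          /* inc eax        -> 1 byte (typical)  */
        y = y + 1;          /* add al, 1      -> 2 bytes (typical) */
        x = x + 0x12345678; /* add eax, imm32 -> 5 bytes (typical) */
        return x + (uint32_t)y;
    }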
 

Zoomer

Senior member
Dec 1, 1999
257
0
76
Insufficient registers, blah blah blah.
It performs well because of all the hacks and extensions made to it.

This ISA was designed to conserve memory, so an ISA that wasn't might perform better.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: Zoomer
Insufficient registers, blah blah blah.
It performs well because of all the hacks and extensions made to it.

This ISA was designed to conserve memory, so an ISA that wasn't might perform better.

But what would the actual difference be? Any examples of something that performs better due to a superior ISA?

For that matter, x86 has 8 32-bit general-purpose registers, while I think the G5 has 32 64-bit ones, so x86 may be worse off in registers than I thought.
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
I don't think x86 is that bad. Many of the weaknesses can be addressed behind the scenes in hardware and remain completely transparent to the programmer, which actually makes it pretty easy to use....
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: dmens
I don't think x86 is that bad. Many of the weaknesses can be addressed behind the scenes in hardware and remain completely transparent to the programmer, which actually makes it pretty easy to use....

Well, if all the weaknesses can be addressed, that means they have a transistor cost. If it takes 50% more transistors to produce a competitive x86 cpu, then that seems like a bad deal; if it takes <1%, then it doesn't really matter at all.
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Yes, the frontend decoding and sequencing requires significant hardware and a few pipestages. But that allows x86 to be translated into any proprietary microcode implementation. That means as your design environment changes, you only have to change the microcode, leaving the high-level ISA intact and preserving backwards compatibility. Using x86 as a high-level definition is what allowed these drawbacks to be addressed transparently in hardware. Most of the cost exists in the frontend, making the backend infinitely flexible.
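As a toy illustration of that translation step (the micro-op names here are hypothetical, since real microcode is proprietary), a single x86-style read-modify-write instruction can be cracked into RISC-like internal operations:

    #include <stdio.h>

    /* Hypothetical micro-ops; every vendor's real set is different. */
    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop;

    /* "Frontend": crack one CISC-style 'add [mem], reg' into micro-ops.
       Returns how many micro-ops were written into out[]. */
    static int decode_add_mem_reg(uop out[])
    {
        out[0] = UOP_LOAD;  /* tmp <- mem[addr]   */
        out[1] = UOP_ADD;   /* tmp <- tmp + reg   */
        out[2] = UOP_STORE; /* mem[addr] <- tmp   */
        return 3;
    }

    int main(void)
    {
        static const char *names[] = { "load", "add", "store" };
        uop seq[3];
        int n = decode_add_mem_reg(seq);
        for (int i = 0; i < n; i++)
            printf("uop %d: %s\n", i, names[seq[i]]);
        /* Change the backend and only this mapping has to change;
           the x86 instruction itself stays the same. */
        return 0;
    }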
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: dmens
Yes, the frontend decoding and sequencing requires significant hardware and a few pipestages. But that allows x86 to be translated into any proprietary microcode implementation. That means as your design environment changes, you only have to change the microcode, leaving the high-level ISA intact and preserving backwards compatibility. Using x86 as a high-level definition is what allowed these drawbacks to be addressed transparently in hardware. Most of the cost exists in the frontend, making the backend infinitely flexible.

So how does it compare to using a more forward-thinking ISA?
 

icarus4586

Senior member
Jun 10, 2004
219
0
0
Recently (~1990 to present) RISC designs have been seen as better than CISC. RISC programs tend to take more memory, since more instructions are needed to accomplish the same thing, and since each instruction takes the same amount of memory space. However, memory hasn't recently been as much of an issue as it once was. RISC CPUs can dedicate more transistors to performance, since they don't need to do as much work in the decoding stages.

What's all this mean for x86 today? Not all that much, actually. In the Pentium, it was estimated that somewhere around 10-20% of its transistors were dedicated to decoding instructions. On the Pentium 4, the equivalent decode logic makes up only about 2% of the CPU.
That means that, today, the penalty for using a CISC instruction set is very small.

x86 is quirky, though. It's not as easy to program in as other CISC architectures, because it isn't orthogonal: many instructions only work with specific registers (MUL, for example, always puts its result in EDX:EAX). Most recent RISC architectures are (mostly) orthogonal, so it's actually easier to write assembly language programs for them than it is for x86. x86 feels like an old, cluttered ISA, while recent RISC ISAs feel clean, if a little sparse.

None of this matters much, though, since compilers take care of that stuff. What's important now is that the penalty for implementing x86 is pretty small. The backends of today's x86 CPUs are actually pretty RISC-like; they're just hiding behind the decoder and microcode engine. It's pretty safe to say that, today, the RISC vs. CISC argument is almost irrelevant.

(edit) Also, recent additions like SSE and AMD64 have added registers and functionality that make the x86 ISA feel even more like RISC to program in.
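To see the non-orthogonality concretely, here's a small GCC inline-asm sketch (x86 + gcc only; just an illustration). The "a" and "d" constraints pin the operands to EAX and EDX, because MUL gives you no choice about where its result goes:

    /* mull multiplies by EAX implicitly and always writes the 64-bit
       product to EDX:EAX - the programmer doesn't pick the registers. */
    unsigned int mul_high(unsigned int a, unsigned int b)
    {
        unsigned int hi, lo;
        __asm__("mull %3"
                : "=a"(lo), "=d"(hi)  /* outputs forced into EAX, EDX */
                : "a"(a), "r"(b));    /* one input forced into EAX    */
        return hi;                    /* upper 32 bits of a*b         */
    }

On a typical RISC, the equivalent multiply can read from and write to any general-purpose registers.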
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
imo the biggest weakness is the lack of defined sw/hw interactions. x86 implementations have used just about every hardware trick possible to optimize code at runtime, but with software predicates it can get a lot better... IA-64, for example.

When compared against other ISAs with similar scope, it does fine or better... probably because there is a lot of experience implementing x86 procs.
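Rough sketch of what a software predicate buys you, in plain C (whether the compiler actually emits cmov on x86 or a predicated op on IA-64 depends on the compiler and flags):

    /* Branchy version: the hardware has to predict the branch. */
    int max_branchy(int a, int b)
    {
        if (a > b)
            return a;
        return b;
    }

    /* Branch-free version: the condition becomes a data dependency,
       trading a possible misprediction for a fixed small latency. */
    int max_branchless(int a, int b)
    {
        return (a > b) ? a : b;
    }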
 

borealiss

Senior member
Jun 23, 2000
913
0
0
x86 is awesome because of one thing: legacy support. programs that ran on an 8086 are still validated on today's x86 cpus.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
x86 is mainstream for a few reasons:

A. It became dominant, so to switch instruction sets you would have to:
I. Write an emulator so all existing x86 programs run on the new ISA
II. Port all future programs to the new ISA
(cont) So switching would be VERY VERY costly and time consuming.

B. x86 is meant to be efficient and simple/cost-effective, as are most superscalar architectures.

C. x86 can be changed as needed by adding extensions: x86-64, SSE(2)(3), 3DNow!(+)(Professional).

To give an idea of the amount of power it would take to move from one instruction set to another, let's look at everyone's good friend Intel, a marketing and computing behemoth. They have forced many technologies upon the retail world, just because they are that large. A couple of years ago, Intel attempted to change instruction sets: they created the EPIC (Explicitly Parallel Instruction Computing) IS, which is used on their Itaniums. Intel, the largest microprocessor producer in the world (by far, especially at that time), was stopped dead in its tracks. The market simply wouldn't budge. Manufacturers refused it and wouldn't have anything to do with it.
Now, would EPIC have been better? In some areas, absolutely! But like anything man-made, it has its pros and cons. The one major con was that it required switching the entire industry to a new instruction set.

-Kevin
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: Gamingphreek
x86 is mainstream for a few reasons:

A. It became dominant, so to switch instruction sets you would have to:
I. Write an emulator so all existing x86 programs run on the new ISA
II. Port all future programs to the new ISA
(cont) So switching would be VERY VERY costly and time consuming.

B. x86 is meant to be efficient and simple/cost-effective, as are most superscalar architectures.

C. x86 can be changed as needed by adding extensions: x86-64, SSE(2)(3), 3DNow!(+)(Professional).

To give an idea of the amount of power it would take to move from one instruction set to another, let's look at everyone's good friend Intel, a marketing and computing behemoth. They have forced many technologies upon the retail world, just because they are that large. A couple of years ago, Intel attempted to change instruction sets: they created the EPIC (Explicitly Parallel Instruction Computing) IS, which is used on their Itaniums. Intel, the largest microprocessor producer in the world (by far, especially at that time), was stopped dead in its tracks. The market simply wouldn't budge. Manufacturers refused it and wouldn't have anything to do with it.
Now, would EPIC have been better? In some areas, absolutely! But like anything man-made, it has its pros and cons. The one major con was that it required switching the entire industry to a new instruction set.

-Kevin

But how would a P4 with the x86 frontend compare to the same core with a different frontend? Would it make a significant difference? Could more be done? For instance, it's often said that x86 limits the floating point and vector performance of chips, though the Opteron proves this wrong in real-world circumstances. (Intel's chips seem to fall way behind in FP unless they use SSE, and I don't think SSE helps vector performance... actually, what is vector performance? Isn't that like processing 4 scalar 32-bit pieces of data at once instead of one 128-bit piece? Anyhow, it's not like there are many full-fledged chips to compare against; IBM's top-of-the-line chips are in a price range far removed from Intel's and AMD's, and I doubt their POWER4 line sees much development now that POWER5 is out.) So how much improvement could be seen from ditching x86? A 1% performance increase? 10%? 50%? Or would it just result in a reduction in transistors or power consumption?
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Hard to tell. The front and back ends need to be synced for good throughput. Also, if x86 were ditched you'd have to redesign the whole damn thing, for lots of reasons... you can't just swap the frontend due to various implementation issues.

Vector perf is SIMD processing, which on x86 means SSE.
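Concretely, one SSE instruction operates on four packed 32-bit floats at once. A minimal sketch with the <xmmintrin.h> intrinsics (assumes an SSE-capable compiler):

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* Add two arrays of 4 floats with one packed SSE add
       instead of four scalar adds. */
    void add4(const float *a, const float *b, float *out)
    {
        __m128 va = _mm_loadu_ps(a);    /* load 4 floats (unaligned ok) */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vr = _mm_add_ps(va, vb); /* one instruction, 4 adds */
        _mm_storeu_ps(out, vr);
    }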
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: dmens
Hard to tell. The front and back ends need to be synced for good throughput. Also, if x86 were ditched you'd have to redesign the whole damn thing, for lots of reasons... you can't just swap the frontend due to various implementation issues.

Vector perf is SIMD processing, which on x86 means SSE.
agreed..

it's not the x86 architecture that limits fp performance, it's x87 and its stack-based registers. sse and later fp instructions all use a flat register model.
and the p4's fpu sucks for reasons not attributed to x86.

imo the biggest weakness is the lack of defined sw/hw interactions. x86 implementations have used just about every hardware trick possible to optimize code at runtime, but with software predicates it can get a lot better... IA-64, for example.
doing it in software requires a lot more memory, both system and cache (= money). it also brings up a problem with binary compatibility - which .net should help with.
When compared against other ISAs with similar scope, it does fine or better... probably because there is a lot of experience implementing x86 procs.
the concepts applied to an x86 cpu aren't applicable only to it.. and they aren't all part of the ISA definition (x86 doesn't specify a pipeline, superscalar arch, branch predictor, or any other performance feature).
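fwiw, the same C source can target either register model. The gcc flags below are real; the exact asm you get depends on compiler version, so take the comments as the typical result:

    /* fp.c - one multiply, two very different register models.
     *
     *   gcc -O2 -m32 -mfpmath=387 -S fp.c
     *     -> x87 stack code: fld/fmul/fstp shuffling st(0)..st(7)
     *   gcc -O2 -m32 -msse2 -mfpmath=sse -S fp.c
     *     -> flat-register code: mulsd on xmm0..xmm7
     */
    double scale(double x)
    {
        return x * 1.5;
    }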
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
not ISA vs ISA, rather uproc vs uproc... i'm saying x86 uprocs probably do better because of design experience.
 

cmv

Diamond Member
Oct 10, 1999
3,490
0
76
Originally posted by: Gamingphreek
x86 is mainstream for a few reasons:

A. It became dominant, so to switch instruction sets you would have to:
I. Write an emulator so all existing x86 programs run on the new ISA
II. Port all future programs to the new ISA
(cont) So switching would be VERY VERY costly and time consuming.

B. x86 is meant to be efficient and simple/cost-effective, as are most superscalar architectures.

C. x86 can be changed as needed by adding extensions: x86-64, SSE(2)(3), 3DNow!(+)(Professional).

To give an idea of the amount of power it would take to move from one instruction set to another, let's look at everyone's good friend Intel, a marketing and computing behemoth. They have forced many technologies upon the retail world, just because they are that large. A couple of years ago, Intel attempted to change instruction sets: they created the EPIC (Explicitly Parallel Instruction Computing) IS, which is used on their Itaniums. Intel, the largest microprocessor producer in the world (by far, especially at that time), was stopped dead in its tracks. The market simply wouldn't budge. Manufacturers refused it and wouldn't have anything to do with it.
Now, would EPIC have been better? In some areas, absolutely! But like anything man-made, it has its pros and cons. The one major con was that it required switching the entire industry to a new instruction set.

-Kevin

From the other side of the glass, the main reason Intel didn't successfully move everyone to Itanium was that the platform was too slow when running code written for the prior ISA. If it had been fast and cheap enough, there would have been little resistance.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: cmv
Originally posted by: Gamingphreek
x86 is mainstream for a few reasons:

A. It became dominant, so to switch instruction sets you would have to:
I. Write an emulator so all existing x86 programs run on the new ISA
II. Port all future programs to the new ISA
(cont) So switching would be VERY VERY costly and time consuming.

B. x86 is meant to be efficient and simple/cost-effective, as are most superscalar architectures.

C. x86 can be changed as needed by adding extensions: x86-64, SSE(2)(3), 3DNow!(+)(Professional).

To give an idea of the amount of power it would take to move from one instruction set to another, let's look at everyone's good friend Intel, a marketing and computing behemoth. They have forced many technologies upon the retail world, just because they are that large. A couple of years ago, Intel attempted to change instruction sets: they created the EPIC (Explicitly Parallel Instruction Computing) IS, which is used on their Itaniums. Intel, the largest microprocessor producer in the world (by far, especially at that time), was stopped dead in its tracks. The market simply wouldn't budge. Manufacturers refused it and wouldn't have anything to do with it.
Now, would EPIC have been better? In some areas, absolutely! But like anything man-made, it has its pros and cons. The one major con was that it required switching the entire industry to a new instruction set.

-Kevin

From the other side of the glass, the main reason Intel didn't successfully move everyone to Itanium was that the platform was too slow when running code written for the prior ISA. If it had been fast and cheap enough, there would have been little resistance.

Very true. The emulation layer that runs x86 code on the Itanium/EPIC is extremely slow.

-Kevin
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Originally posted by: Fox5
Originally posted by: Zoomer
Insufficient registers, blah blah blah.
It performs well because of all the hacks and extensions made to it.

This ISA was designed to conserve memory, so an ISA that wasn't might perform better.

But what would the actual difference be? Any examples of something that performs better due to a superior ISA?

For that matter, x86 has 8 32-bit general-purpose registers, while I think the G5 has 32 64-bit ones, so x86 may be worse off in registers than I thought.

Yes, Itanium performs better (clock for clock) than x86. However, you should take into account that the executable files are typically twice as large (sometimes three times larger) than the ones compiled for the x86 ISA (from the same source code and with a similar compiler).
(I'm not arguing that EPIC is superior to x86 or that Itanium is superior to Pentiums and Athlons - just that it's different.)
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Originally posted by: Gamingphreek
x86 is mainstream for a few reasons:

A. It became dominant, so to switch instruction sets you would have to:
I. Write an emulator so all existing x86 programs run on the new ISA
II. Port all future programs to the new ISA
(cont) So switching would be VERY VERY costly and time consuming.

B. x86 is meant to be efficient and simple/cost-effective, as are most superscalar architectures.

C. x86 can be changed as needed by adding extensions: x86-64, SSE(2)(3), 3DNow!(+)(Professional).

To give an idea of the amount of power it would take to move from one instruction set to another, let's look at everyone's good friend Intel, a marketing and computing behemoth. They have forced many technologies upon the retail world, just because they are that large. A couple of years ago, Intel attempted to change instruction sets: they created the EPIC (Explicitly Parallel Instruction Computing) IS, which is used on their Itaniums. Intel, the largest microprocessor producer in the world (by far, especially at that time), was stopped dead in its tracks. The market simply wouldn't budge. Manufacturers refused it and wouldn't have anything to do with it.
Now, would EPIC have been better? In some areas, absolutely! But like anything man-made, it has its pros and cons. The one major con was that it required switching the entire industry to a new instruction set.

-Kevin

The movement to something different is like a snowball effect. If "critical mass" is reached, everything moves faster and faster. If not, everything moves slower and slower.
The problems Itanium faced were:
* hardware price (new servers were more expensive than similarly powered x86)
* extra RAM needed (let's say two times more, depending on workload)
* lower performance when running x86 code
If the hardware price had gone down significantly, then maybe Itanium would have been chosen instead of x86. That might have pushed software vendors to write programs for it, improving performance. Improved performance would bring extra desirability, and so on.
Apple was able to make the switch from Motorola to PowerPC - but their clients had nowhere else to go, and legacy software (sometimes of great financial value) would run faster only on the newer, non-Motorola Apple computers.
It's simply a lock-in of everyone on the x86 ISA. Moving programs to a different ISA can be expensive and sometimes impossible, while emulated performance might not be enough. Cracking this lock-in is next to impossible.
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Originally posted by: Fox5
Originally posted by: dmens
Yes, the frontend decoding and sequencing requires significant hardware and a few pipestages. But that allows x86 to be translated into any proprietary microcode implementation. That means as your design environment changes, you only have to change the microcode, leaving the high-level ISA intact and preserving backwards compatibility. Using x86 as a high-level definition is what allowed these drawbacks to be addressed transparently in hardware. Most of the cost exists in the frontend, making the backend infinitely flexible.

So how does it compare to using a more forward-thinking ISA?

The x86 ISA compares badly in power use with other ISAs/processor architectures (Intel XScale, AMD Geode and others). While the performance of even a Pentium M is much better than that of a Geode or XScale, its power consumption is much higher than the performance ratio would suggest.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
The x86 ISA compares badly in power use with other ISAs/processor architectures (Intel XScale, AMD Geode and others). While the performance of even a Pentium M is much better than that of a Geode or XScale, its power consumption is much higher than the performance ratio would suggest.
Geodes are x86.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: Calin
Originally posted by: Fox5
Originally posted by: Zoomer
Insufficient registers, blah blah blah.
It performs well because of all the hacks and extensions made to it.

This ISA was designed to conserve memory, so an ISA that wasn't might perform better.

But what would the actual difference be? Any examples of something that performs better due to a superior ISA?

For that matter, x86 has 8 32-bit general-purpose registers, while I think the G5 has 32 64-bit ones, so x86 may be worse off in registers than I thought.

Yes, Itanium performs better (clock for clock) than x86. However, you should take into account that the executable files are typically twice as large (sometimes three times larger) than the ones compiled for the x86 ISA (from the same source code and with a similar compiler).
(I'm not arguing that EPIC is superior to x86 or that Itanium is superior to Pentiums and Athlons - just that it's different.)

Isn't the production cost of an Itanium cpu also much higher? If it costs as much as a quad core, then that performance per clock doesn't matter as much.

The x86 ISA compares badly in power use with other ISAs/processor architectures (Intel XScale, AMD Geode and others).

I thought Geode was x86? Well, that would explain why the very low-end versions run the PocketPC OS, but I was only aware of two Geodes, a Cyrix-derived one and a rebadged Athlon XP.
And you can't directly compare the power consumption of a far worse-performing chip to that of a higher-performing one, since power consumption doesn't scale linearly with performance.