i've asked this before but no one's answered...

jhu

Lifer
Oct 10, 1999
11,918
9
81
why don't intel and amd put in a mode whereby their respective risc instructions are fetched and executed directly, instead of having to translate x86 instructions? i remember one of the earlier processors (i think it was nexgen's nx586) had this kind of dual-mode operation: decode x86 instructions or execute its risc operations directly. with the rise of free operating systems, it'd just be a matter of recompiling the os
 

Xalista

Member
May 30, 2001
113
0
0
I think this is because the RISC-like instructions that the CISC instructions are translated into are not directly comparable to a real RISC instruction set. So it would be pointless to offer these RISC-like instructions as an alternative ISA, because they do not form a fully functional ISA. It seems you are under the impression that modern x86 processors just emulate the x86 ISA on top of a RISC processor, but this is not the case. The CISC instructions are translated into so-called micro-ops, but these micro-ops do not make up a full RISC ISA.

Not sure if this is true though, it is just how I think it works.
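
Just to make the micro-op idea concrete, here is a toy sketch in C. The real micro-op formats are proprietary and far more complicated, so the operation names and fields below are entirely made up; the point is only that one register-memory x86 instruction like add [eax], ebx gets cracked into a few simpler, load/store-style internal operations.

#include <stdio.h>

/* toy model of micro-op "cracking" -- purely illustrative,
   not how any real decoder represents things internally */
typedef struct {
    const char *op;    /* internal operation             */
    const char *dst;   /* destination (register or temp) */
    const char *src1;  /* first source                   */
    const char *src2;  /* second source                  */
} micro_op;

int main(void)
{
    /* hypothetical cracking of "add [eax], ebx" */
    micro_op uops[] = {
        { "load",  "tmp0",  "[eax]", ""    },  /* read the memory operand  */
        { "add",   "tmp1",  "tmp0",  "ebx" },  /* do the actual arithmetic */
        { "store", "[eax]", "tmp1",  ""    },  /* write the result back    */
    };
    int i;

    for (i = 0; i < 3; i++)
        printf("uop %d: %-5s %-6s %-6s %s\n",
               i, uops[i].op, uops[i].dst, uops[i].src1, uops[i].src2);
    return 0;
}

In a real chip these micro-ops are much wider, carry renaming and scheduling information, and change from one core design to the next, which is part of why they would make a poor public ISA.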
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
First off, AFAIK neither AMD nor Intel is willing to release exactly how they do their respective x86 -> RISC micro-op decoding. I believe Paul DeMone once mentioned that the micro-op decoding process is likely one of the major sources of the critical path length in x86 processors (hence Intel's decision to use a trace cache on the P4)....it likely involves lots of high fan-in logic and high fan-out signals, all of which have to be propagated from the first to the second (or even third) stage of the decoding process. AMD and Intel don't really want to give up the details of their implementations.

Secondly, it would be a lot more work than merely recompiling the OS. The instruction set implementation would be radically different from x86....meaning new compilers with drastically different methods of code optimization, in the form of loop unrolling, strength reduction, peephole optimization, etc. This is no easy task, and it would probably take a few generations of compilers to work out the optimizations for a new instruction set. After all, Intel has been concentrating on compiler development for IA-64 for years, and research will likely continue for a long time (granted, IA-64's VLIW is much different from OOOE superscalar and relies more on compiler development). Also, AMD has never been involved in compiler development, either in-house or in cooperation with others (a mistake, IMHO), so it would require a new model of CPU development. The Athlon was similar enough to the P3 in terms of pipeline organization and instruction latency that it could benefit from the established base of P3-optimized code...the Athlon is somewhat like a wider, bigger version of the P3, so it could execute P3-optimized code better than the P3 could. As an aside, IMHO AMD had better get rolling on in-house compiler development for the Hammer if they want their x86-64 mode software to be well-optimized....and getting x86-64 support into Microsoft's compilers will be key if they want widespread support.
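
To give an idea of what a couple of those optimizations actually look like, here is a quick hand-written sketch in C. A real compiler applies these transformations to its intermediate representation rather than to the source, and the function names here are just made up for illustration.

/* original: a multiply on every iteration */
void scale(int *a, int n)
{
    int i;
    for (i = 0; i < n; i++)
        a[i] = i * 8;
}

/* strength reduction: the per-iteration multiply becomes a running addition */
void scale_reduced(int *a, int n)
{
    int i, v = 0;
    for (i = 0; i < n; i++) {
        a[i] = v;
        v += 8;
    }
}

/* loop unrolling: four elements per pass, fewer branches and loop-counter
   updates (assumes n is a multiple of 4 just to keep the example short) */
void scale_unrolled(int *a, int n)
{
    int i, v = 0;
    for (i = 0; i < n; i += 4) {
        a[i]     = v;  v += 8;
        a[i + 1] = v;  v += 8;
        a[i + 2] = v;  v += 8;
        a[i + 3] = v;  v += 8;
    }
}

Getting a compiler to pick the right mix of transformations like these for a brand-new instruction set, across all the code people actually run, is the part that takes years of tuning.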

Then, once you have the compilers developed and well-optimized, it's still an issue to get the applications and OSs recompiled for an x86 RISC mode. Given x86's huge user base, this means that applications will have to be released with both x86 and x86-RISC support for a long time. All of this development for a new ISA would probably be a big expense for Intel, AMD, and the software developers....given that Intel and AMD are already far into the development of their new respective ISAs (IA-64 and x86-64), I really think it would be too big of an expense for the benefit of simpler decoding.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
like i said, nexgen did do that with their cpu during the "pentium wars" in the mid-'90s. it seemed like a nice option even if only a few used it.