Originally posted by: Sohcan
The PowerPC runs at much lower MHz, which equates to a much lower operating temperature, and yet seems to get more bang for the MHz buck. I would imagine that a 3.08 GHz RISC processor would smoke the current CISC P4?
One cannot focus only on the instruction-set style to make such comparisons; doing so ignores the internal microarchitectural implementation. The equation for performance is:
Time/program = cycles/instruction * seconds/cycle * instructions/program
Cycles/instruction (CPI, the inverse of IPC) is dictated by the microarchitecture (organization) and the instruction set; seconds/cycle (the inverse of clock rate) by the microarchitecture and the implementation (circuit and physical design); and instructions/program (the instruction count) by the software and the instruction set.
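To make the terms concrete, here's a quick back-of-the-envelope in Python. The numbers are invented purely to show how the three factors multiply out; they aren't measurements of any real CPU:

```python
# Hypothetical numbers, purely to illustrate the performance equation.
instructions = 1_000_000_000   # instructions/program (dynamic instruction count)
cpi = 1.2                      # cycles/instruction (sustained, not peak)
clock_hz = 2.0e9               # clock rate: 2 GHz => 0.5 ns/cycle

seconds_per_cycle = 1.0 / clock_hz
time = instructions * cpi * seconds_per_cycle   # seconds/program
print(f"{time:.3f} s")   # 1.0e9 * 1.2 * 0.5e-9 = 0.600 s
```

Note that halving CPI or doubling the clock rate has exactly the same effect on runtime, which is why you can't judge a CPU by clock rate (or by instruction-set style) alone.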
But the important thing to note is that the instruction set has only a second-order influence on cycles/instruction; the internal organization is far more important. The Athlon and P4 share the same basic microarchitecture as most high-performance RISC CPUs, namely dynamically scheduled superscalar, in which multiple instructions can issue each cycle out of program order. The Athlon, P3, P4, and G4 all issue/retire 3 instructions/cycle, and most server-class RISC CPUs issue and retire 4 or 5 instructions/cycle. The Athlon and P4 both decode x86 instructions into smaller RISC-like operations (typically one arithmetic operation and one memory operation) to facilitate pipelining. Thus the main effects of using the legacy x86 ISA are greater engineering difficulty in decoding x86 instructions and in tracking instructions and their condition codes down the pipeline, as well as some performance loss due to x86's fewer logical registers (8 vs. 32 in classic RISC architectures). The small register file makes x86 CPUs more dependent on the memory subsystem and can restrict certain kinds of code generation by compilers.
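Here's a toy sketch of that cracking step. This is not how any actual decoder is implemented, and the instruction and micro-op formats are invented for the example; it just shows the idea of splitting a reg-mem x86-style instruction into a memory micro-op plus an arithmetic micro-op:

```python
# Toy model of cracking a reg-mem x86-style instruction into RISC-like
# micro-ops: one memory operation plus one arithmetic operation.
# The textual formats here are invented for illustration.

def crack(insn: str) -> list[str]:
    """Split e.g. 'add eax, [ebx]' into a load micro-op and an ALU micro-op."""
    op, operands = insn.split(maxsplit=1)
    dst, src = [s.strip() for s in operands.split(",")]
    if src.startswith("["):                      # reg-mem form: load first
        return [f"load tmp0, {src}",             # memory micro-op
                f"{op} {dst}, {dst}, tmp0"]      # arithmetic micro-op
    return [f"{op} {dst}, {dst}, {src}"]         # reg-reg form: single micro-op

print(crack("add eax, [ebx]"))
# ['load tmp0, [ebx]', 'add eax, eax, tmp0']
```

Once cracked, the individual micro-ops look just like RISC instructions to the out-of-order core, which is why the internal organization matters far more than the external instruction set.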
I would imagine that a 3.08 GHz RISC processor would smoke the current CISC P4?
Depends on the processor. A 3 GHz Alpha EV7 would definitely be very fast compared to a 3 GHz P4, but one must look at the other differences as well. The EV7 has a far more robust microarchitecture: it has a higher issue and retire rate, a far more advanced branch predictor, and more aggressive scheduling (including the ability to issue loads and stores out of order). It also has a more advanced memory system, with larger and higher-bandwidth L1 and L2 caches and a much higher-bandwidth main memory system.
So while the Athlon and P4 generally sustain fewer instructions/cycle than many server-class RISC CPUs (though the Athlon is certainly comparable to the G4), they have the advantage of a higher clock rate. There are a number of reasons for this. There is the obvious pipeline-length factor, though the IBM POWER4 has an integer pipe that is longer than the Athlon's. Generally speaking, x86 CPUs achieve a higher clock rate because of their volume and target market. A higher-volume chip can achieve better speed bins during manufacturing. And because they are produced in such high volume, x86 CPUs generally move to finer manufacturing process technologies before server-class CPUs, which are often produced on older, tried-and-true processes in order to maximize yield given their large die size. For example, the P4 is moving to a 90nm process node at the end of this year, while the Alpha EV7 is being moved to a 130nm process next year.
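The pipeline-length point can be sketched with the usual back-of-the-envelope model: splitting a fixed amount of combinational logic across more stages means less work per cycle, so the clock can run faster, but per-stage latch overhead puts a floor on the cycle time. The numbers below are assumptions for illustration only:

```python
# Back-of-the-envelope model: cycle time = logic_depth / stages + latch_overhead.
# All numbers are invented for illustration.
LOGIC_DEPTH_NS = 10.0    # total combinational logic delay for one instruction
LATCH_OVERHEAD_NS = 0.1  # per-stage pipeline register overhead

for stages in (10, 20, 30):
    cycle_ns = LOGIC_DEPTH_NS / stages + LATCH_OVERHEAD_NS
    print(f"{stages:2d} stages -> {1.0 / cycle_ns:.2f} GHz")
# 10 stages -> 0.91 GHz, 20 -> 1.67 GHz, 30 -> 2.31 GHz
```

Of course, a longer pipeline also raises the branch misprediction penalty, so the extra clock rate doesn't translate one-for-one into performance.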
Thus, because the increasing number of transistors per chip has allowed x86 CPUs to adopt advanced microarchitectures, and because of their volume manufacturing, x86 CPUs have generally caught up to server-class RISC performance in many respects. Just check out the position of the Athlon and P4 in SPECint 2K and SPECfp 2K (cross-platform benchmarks used to test integer and floating-point workstation-class performance).
One thing I have never learned is why do Intel and other PC manufacturers use CISC-based processors and not RISC-based ones like DEC and PowerPC?
Frankly, because this industry loves backwards compatibility. It should be noted that Intel tried twice before to get rid of x86, not counting IA-64. x86 was originally a stop-gap measure put out by Intel because of the numerous delays of the iAPX 432, a so-called "super-CISC" chip that directly executed high-level object-oriented code. It was over five years late to market and performed very poorly. Their second attempt was the i860 in the late 80s, a RISC CPU that had some very interesting features, including a VLIW-like dual-instruction-issue mode. Though it achieved success as a microcontroller and in the Paragon supercomputer, it flopped as a general-purpose desktop and workstation microprocessor.
* not speaking for Intel Corp. *