Why did RISC computing die?

UNCjigga

Lifer
Dec 12, 2000
25,281
9,782
136
Okay let me preface this post by saying I know next to nothing about how microprocessors work. I know that they use millions of tiny transistors to form gates and logic switches and all that, but I don't know much about the underlying architectures, assembly code or any of that. My understanding of RISC, back in the late 80s to early 90s, was that RISC was inherently 'better' than CISC. I guess the argument went like this: the more logic you add to a processor, the more complex its execution pipeline becomes. By using a reduced instruction set, your processors can be more efficient and thus process instructions much, much faster. This is why RISC was favored for supercomputers, enterprise servers and high-performance workstations.

But then something happened. From the late '90s through the beginning of this century, x86 saw a resurgence in the marketplace. Pentium II and Pentium III finally proved that CISC/x86 could work with out-of-order execution, branch prediction and native 32-bit code just as efficiently as RISC. The eventual failure of Itanium vs. Xeon, HP's move to Opteron, and Sun's move to Opteron kinda completes the picture. Even the Power processor took on more CISC-like functionality with Power4/Power5, and Apple still dropped it in favor of x86. So what happened to RISC? Why aren't we all using RISC chips these days to save power and be hyper-efficient?
 

AthlonPowers

Member
Jul 30, 2005
56
0
0
I believe there were numerous factors in the decline of the ideal, but one of the largest must have been Intel's ability to manufacture chips whose performance approached that of the RISC chips produced by (typically) the big-box UNIX vendors. Those vendors also didn't have the benefit of the economies of scale that Intel did, and designing and producing their own chips ran into hundreds of millions of $$$. You also have to consider the markets: Intel succeeded by selling chips to producers of the corporate PC (among other things), while the big-box vendors were selling much more expensive servers and workstations. Around the same time, the PC was becoming increasingly popular with everyday folk - an overall much larger market - whereas the expensive servers and workstations became increasingly marginalized over the years as the bang for buck of the Intel-based machines proved too tempting for most.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
RISC made sense when hardware was relatively expensive compared to the cost of software. RISC design philosophies essentially shift some of the burden from chip design (simpler chips) onto software (more sophisticated compilers to produce good machine code). Economies of scale and advances in technology have let CISC designs keep up and outlast the RISC revolution. Modern x86 CPUs are essentially RISC/CISC hybrids, as they have an x86 (CISC) front end that decodes x86 instructions into simpler micro-ops that are internally executed very quickly (RISC-like behavior).
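
To make that concrete, here's a rough sketch in C with the corresponding instruction sequences in the comments. The register assignments and the exact micro-op split are illustrative only, not taken from any particular compiler or CPU:

/* One C statement, three views of it (all illustrative).
 *
 *   total += table[i];
 *
 * x86 (CISC): the memory operand, with its scaled-index addressing,
 * is folded into the ALU instruction (assuming total is in eax,
 * table in rbx, i in rcx):
 *
 *   add  eax, [rbx + rcx*4]
 *
 * Inside a modern x86 core the decoder cracks that into simpler,
 * RISC-like micro-ops, roughly:
 *
 *   uop1: load tmp  <- mem[rbx + rcx*4]
 *   uop2: add  eax  <- eax + tmp
 *
 * A classic RISC ISA (MIPS-style shown) never had the combined form;
 * the compiler emits the explicit address/load/compute sequence itself:
 *
 *   sll   t0, a2, 2       # i * 4
 *   addu  t0, t0, a1      # address of table[i]
 *   lw    t1, 0(t0)       # explicit load
 *   addu  v0, a0, t1      # the add
 */
int add_element(int total, const int *table, int i)
{
    return total + table[i];
}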
 

sandorski

No Lifer
Oct 10, 1999
70,655
6,222
126
My understanding is that RISC isn't dead, at least no more dead than CISC. The two have merely merged into the hybrid setup common in today's x86 processors.
 

dwcal

Senior member
Jul 21, 2004
765
0
0
Originally posted by: aka1nas
Modern x86 CPUs are essentially RISC/CISC hybrids, as they have an x86 (CISC) front end that decodes x86 instructions into simpler micro-ops that are internally executed very quickly (RISC-like behavior).

This part is the key. Over time the x86 decoder became less of a disadvantage: die shrinks let designers fit far more transistors onto the die in total, so the decoder occupied an ever smaller fraction of the chip.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Because RISC ran out of steam?

But RISC is also not completely dead, ...yet.
If you're doing a more modest, small and simple low-performance core, there are clear advantages to a RISC-type architecture. Hence Cell and ARM.

The original concept of RISC, which also gave it the name, was to eliminate much of the microcoding and to be able to "afford" various supercomputer technologies (mainly more registers and pipelining, later superscalar execution) on the transistor budget, by designing the instruction set for that very purpose. "Reduced instruction set computing". Primarily, instructions referring to operands with complex addressing were eliminated; RISC loads and stores explicitly instead.

Another RISC principle is that the compiler takes a lot of responsibility for how code is executed. That is how all the extra visible registers are intended to be used for increased efficiency.
In a RISC CPU, the hardware design takes precedence, and the ISA is designed from that.

That is what RISC really is. In a CISC CPU, it's the other way around: the ISA is designed first, then the hardware is designed to implement it.
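
To put that compiler-responsibility point in concrete terms, here is a small sketch in plain C. Nothing in the code is architecture-specific; the register counts in the comments are just the architectural ones (8 GPRs on 32-bit x86 versus 32 on a typical RISC of the era):

/* Roughly ten values are live across this loop: four accumulators,
 * two pointers, the trip count, the index, plus load temporaries.
 *
 * On a 32-register RISC the compiler can keep all of them in
 * architectural registers and schedule the loads early enough to
 * hide their latency; how well that works is entirely up to the
 * compiler. On 32-bit x86, with only 8 GPRs (some reserved for the
 * stack/frame pointer), the compiler has to spill a few values to
 * the stack, adding loads and stores the programmer never wrote.
 */
void sum4(const int *a, const int *b, int n, int out[4])
{
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;   /* four independent accumulators */
    for (int i = 0; i + 3 < n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    out[0] = s0; out[1] = s1; out[2] = s2; out[3] = s3;
}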

Now, considering that hardware designs have grown from some thirty thousand transistors to hundreds of millions since then, it should be obvious that there might be reasons why RISC hasn't proved to be such a sustainable concept.

When the CISC CPU also plays, if not exactly the same, then still a similar supercomputing hardware game as RISC, what are you left with?

The RISC ISA is ultimately, inherently 'worse': slower. A RISC CPU may be forced to use its superscalarity and/or spend more cycles, and rely on the compiler to figure it all out, to perform the same work that a CISC CPU will simply spread over different stages of the pipeline, and thus accomplish at a throughput of one instruction per cycle, per pipeline.


--------

There is a completely different aspect that I haven't touched on so far. The world of software users always wants x86, so they can continue to run their software. This has meant that there is a much larger volume in x86. That in turn means that there are more development resources available for x86, and more advanced production technologies. An x86 CPU can be much larger and higher clocked than a similarly priced RISC CPU.

I don't think it makes any difference in the end regarding CISC vs. RISC. But CISC wouldn't have been developed as it has without that volume; without x86's volume, CISC would likely have been abandoned. There would have been no sustained gains from abandoning it, though.

In a CISC CPU the hardware logic is more decoupled from the ISA and compiler than it can be allowed to be in a RISC design. I'm tempted to see that as an advantage in the long run.
 

carlosd

Senior member
Aug 3, 2004
782
0
0
Today's x86 CPUs are post-RISC CPUs, with an x86 ISA front end. To execute an x86 instruction, today's CPUs most of the time perform a few fixed-length instructions called micro-operations, which are basically RISC instructions.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: carlosd
Today's x86 CPUs are post-RISC CPUs, with an x86 ISA front end. To execute an x86 instruction, today's CPUs most of the time perform a few fixed-length instructions called micro-operations, which are basically RISC instructions.

CISC instructions have always been executed as micro-operations. They used to be microcoded in the early, small CPUs. Was the 8080 a "post-RISC CPU"?

No, it's a popular analogy, but the basic content of it is that CISC today employs some of the same hardware technologies that RISC set out to achieve.
And those techniques are employed all the way through. The decoding too: the so-called "CISC front end" is superscalar and pipelined.