CISC in modern manufacturing

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
In the past, RISC designs became king, since they could reach higher clocks at lower power consumption...

but time has passed, manufacturing has improved a LOT since then... and the further the node shrinks go, the harder the "4 GHz power wall" hits (yes, an oversimplification)

since the clock speed/watt advantage is evaporating, wouldn't it be wise to go back to CISC?
 

sefsefsefsef

Senior member
Jun 21, 2007
218
1
71
Intel and AMD do both CISC and RISC in the same CPU. x86 is CISC. CISC never went away.

Are you suggesting trying to do true Out-of-Order (OoO) CISC? The Pentium Pro introduced OoO to the CISC world by first translating the CISC instructions to RISC-like micro-ops, which are much easier to handle in an OoO pipeline.
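That cracking step can be sketched in a few lines. This is a toy model, not Intel's actual decoder: the uop names and formats below are invented purely for illustration.

```python
# Toy sketch of CISC-to-micro-op cracking, loosely in the spirit of the
# Pentium Pro front end. The uop names and formats here are invented for
# illustration; real decoders are far more involved.

def crack(insn):
    """Split one x86-style instruction string into RISC-like micro-ops."""
    op, *args = insn.replace(",", " ").split()
    if op == "ADD" and len(args) == 2 and args[1].startswith("["):
        # ADD reg, [mem]: the memory read becomes its own uop, followed
        # by a simple register-register add -- each uop touches either
        # memory or the ALU, never both.
        addr = args[1].strip("[]")
        return [f"uLOAD tmp <- [{addr}]",
                f"uADD {args[0]} <- {args[0]}, tmp"]
    return [insn]  # register-only instructions map to a single uop

crack("ADD eax, [ebx]")  # -> ["uLOAD tmp <- [ebx]", "uADD eax <- eax, tmp"]
```

The point is that the OoO machinery downstream only ever sees uniform, simple operations, no matter how baroque the architectural instruction was.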
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
Intel and AMD do both CISC and RISC in the same CPU. x86 is CISC. CISC never went away.

Are you suggesting trying to do true Out-of-Order (OoO) CISC? The Pentium Pro introduced OoO to the CISC world by first translating the CISC instructions to RISC-like micro-ops, which are much easier to handle in an OoO pipeline.

a true CISC... back to pre-Pentium Pro days, without the need for translation
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
There's no hard boundary between RISC and CISC. How processors decide to crack instructions, and where the cracking (and rejoining) is performed, varies a lot between different uarchs. For instance, on modern Core series processors a fused uop keeps load + op as one operation for most of the pipeline, only splitting into unfused uops when it's ready to be sent to the execution units. This goes well beyond a naive front-end translation of "CISC" to "RISC."

Another example is Atom, which has a full read-modify-write pipeline.
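The effect of fusion on pipeline resources can be shown with a toy accounting model (the stage model below is invented for illustration, not Intel's actual pipeline):

```python
# Toy accounting model of uop fusion: a load+op instruction travels the
# front end and scheduler as ONE fused uop and only splits into two
# unfused uops when dispatched to the execution units.

def slot_counts(insns):
    """Return (front-end slots, execution slots) for a list of
    instructions tagged either 'load_op' or 'simple'."""
    fused = insns.count("load_op")
    simple = len(insns) - fused
    front_end = fused + simple       # fused uop occupies a single slot
    execute = 2 * fused + simple     # split: one load uop + one ALU uop
    return front_end, execute

slot_counts(["load_op", "simple", "load_op"])  # -> (3, 5)
```

So the "CISC-ness" of load+op survives through rename and scheduling, where slots are scarce, and the "RISC-ness" only appears at the execution units.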

Ultimately the uarch design should do whatever fits best for the underlying power, area, and performance targets.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Some of you guys are mixing the ISA with the microarchitecture with the architecture.

[Image: x86 ISA over time]


x86, SPARC, Power = architecture

x87, SSE4.2, AVX, etc. = specific sets of instructions within the ISA

Nehalem, Niagara, POWER7+ = microarchitecture (the physical manifestation of the compute implementation for the supported ISA)

An architecture can be CISC or RISC, which itself is entirely independent of the implemented microarchitecture.

Architecture and ISA determine code and compiler complexity, microarchitecture determines IPC and pretty much every performance metric (performance/watt, etc) within a given process node.

Some instructions obviously boost the IPC, AVX and FMAC for example, but ultimately it is the circuit logic that gets implemented which determines clockspeeds, latencies, power usage, and time-to-result.

There are pros and cons to both approaches, and obviously neither is truly superior to the other in all the ways that matter given that neither has made the other extinct in the 30+ yrs they have co-existed.

An example of this is the Transmeta Crusoe, which handles the x86 CISC architecture but is, by its very definition, not a CISC microarchitecture.

The Crusoe is a family of x86-compatible microprocessors developed by Transmeta. Crusoe was notable for its method of achieving x86 compatibility. Instead of the instruction set architecture being implemented in hardware, or translated by specialized hardware, the Crusoe runs a software abstraction layer, or a virtual machine, known as the Code Morphing Software (CMS). The CMS translates machine code instructions received from programs into native instructions for the microprocessor. In this way, the Crusoe can emulate other instruction set architectures (ISAs).
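The idea of a code-morphing layer can be sketched as a translate-and-cache loop. This is a generic illustration of software dynamic binary translation, not Transmeta's actual CMS; the guest instructions and host operations below are invented.

```python
# Generic sketch of a code-morphing-style translation layer: guest
# instructions are translated to host operations once, cached, and the
# cached "native" form is reused on later encounters. The guest ISA and
# host ops here are invented; the real CMS is vastly more complex.

translation_cache = {}

def translate(guest_insn):
    """Translate one guest instruction into a host-level callable."""
    op, reg, imm = guest_insn
    host_ops = {"add": lambda x, y: x + y, "sub": lambda x, y: x - y}
    return lambda regs: regs.__setitem__(reg, host_ops[op](regs[reg], imm))

def run(program, regs):
    for insn in program:
        if insn not in translation_cache:        # translate on first sight
            translation_cache[insn] = translate(insn)
        translation_cache[insn](regs)            # reuse cached translation
    return regs

regs = run([("add", "r0", 5), ("sub", "r0", 2), ("add", "r0", 5)], {"r0": 0})
# regs["r0"] == 8; the repeated ("add", "r0", 5) hit the cache
```

The hot-path payoff in a real system comes from translating whole blocks and optimizing them, but the cache-and-reuse structure is the same.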
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Some of you guys are mixing the ISA with the microarchitecture with the architecture.

Application instruction set is defined by the architecture and ucode (if at all applicable) is defined by the microarchitecture. Yet they're both instruction sets and subject to the same classification criteria. Saying that a particular uarch's ucode is RISC-like or CISC-like isn't wrong beyond that it's pretty arbitrary what these terms actually mean.

In the case of Intel's mainline P6-descended CPUs, we really have multiple internal uarch instruction sets. They're not publicly documented at all, so it's difficult to say much about them, but the classification still works in theory.

You'd have a point if someone was asking "should x86 revert to true CISC?" but I don't see anything like that here, although I suppose it may be implied by asking if Intel should "go back" to CISC.

Some instructions obviously boost the IPC, AVX and FMAC for example, but ultimately it is the circuit logic that gets implemented which determines clockspeeds, latencies, power usage, and time-to-result.

AVX and FMA don't boost instructions per cycle; they boost work per instruction. It works better if you talk about performance per clock instead of instructions per cycle.
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
There are pros and cons to both approaches, and obviously neither is truly superior to the other in all the ways that matter given that neither has made the other extinct in the 30+ yrs they have co-existed.

we haven't seen a new CISC for a very long time... and Transmeta doesn't help, since it was done on 180nm... or am I missing something?