
RISC vs CISC. Which is better?

Originally posted by: Nemesis 1
K5, isn't that the CPU design that AMD got when they bought another company? 4-issue, wasn't it?

Nah, you are thinking of the K6.

K5 was AMD's first fully in-house design, without the help of Intel IP.

Perhaps not too surprisingly for their first stab at doing hard-core chip design, the sucker had a little trouble scaling with clockspeed (doh! more K10=K5x2 fodder), so management scrambled to buy a "fabless" company that was doing really well and had what many considered the first real Pentium-class competitor worth mentioning at the time: NexGen's Nx586.

Here's the quote from the wiki on nexgen:
When AMD's K5 chip failed to meet performance and sales expectations, AMD purchased NexGen, largely to get the design team and the Nx586's follow-up design, which became the basis for the commercially successful AMD K6.
 
Originally posted by: Idontcare
Anyone remember Mitosis? (speculative multi-threading of single-threaded apps)

Anandtech, Fall IDF 2005 - Day 1, Turning Single into Multi-Threaded with Speculative Threading

I wonder if that is completely dead now, or perhaps the concept has been simmering behind closed doors within Intel's terascale think tanks?

Auto-parallelisation has been an option in the Intel compiler for quite some time. There are SPEC submissions with it enabled. Sun and PGI also have something similar in their compilers.
 
Originally posted by: Nemesis 1
But I would still like to know which is better, RISC or CISC, and why.
CISC was mostly used in the early days of computing and the motivation for using it was to keep code size down. Memory was very expensive and slow back then and by making more complicated chips with more multicycle instructions, you could get away with fewer instructions to accomplish a certain task. This saved memory space and put less stress on the memory busses. Since programs were often written in assembly code, the complex instructions also helped make programs less complex (easier to write/understand).

Fast-forward to modern times and the need for CISC has disappeared completely. Memory is very cheap (and mostly needed for data, not code) and programs are written in high-level languages. So, why choose RISC instead? Because it simplifies chip design greatly. There's no need for a complex frontend to break up the instructions to make them easier to pipeline and execute. There's less "risc" ;) of screwing up at the design level, the design uses fewer transistors, and it's easier to reach a comparable performance level.

So, RISC can be regarded as the better solution, but unless you are a hardware designer or compiler writer you generally shouldn't care.

A bit simplified, but I hope it clears the fog a bit. 🙂
 
Originally posted by: Brunnis
Originally posted by: Nemesis 1
But I would still like to know which is better, RISC or CISC, and why. Also, is the register on the front end or the back end?
CISC was mostly used in the early days of computing and the motivation for using it was to keep code size down. Memory was very expensive and slow back then and by making more complicated chips with more multicycle instructions, you could get away with fewer instructions to accomplish a certain task. This saved memory space and put less stress on the memory busses. Since programs were often written in assembly code, the complex instructions also helped make programs less complex (easier to write/understand).

Fast-forward to modern times and the need for CISC has disappeared completely. Memory is very cheap (and mostly needed for data, not code) and programs are written in high-level languages. So, why choose RISC instead? Because it simplifies chip design greatly. There's no need for a complex frontend to break up the instructions to make them easier to pipeline and execute. There's less "risc" ;) of screwing up at the design level, the design uses fewer transistors, and it's easier to reach a comparable performance level.

So, RISC can be regarded as the better solution, but unless you are a hardware designer or compiler writer you generally shouldn't care.

A bit simplified, but I hope it clears the fog a bit. 🙂

Thanks so very much for a great reply. Ya broke it down in a simple way that now makes sense to me in other articles I read. :thumbsup:

 
Originally posted by: Nemesis 1
Thanks so very much for a great reply. Ya broke it down in a simple way that now makes sense to me in other articles I read. :thumbsup:
You're welcome. 🙂
 
I think all of this can be broken down to this:

Modern x86 CPUs are the product of mixing some of the best elements of both RISC and CISC. Hybrid vigor, if you will.
 
Originally posted by: Idontcare
Anyone remember Mitosis? (speculative multi-threading of single-threaded apps)

Anandtech, Fall IDF 2005 - Day 1, Turning Single into Multi-Threaded with Speculative Threading

I wonder if that is completely dead now, or perhaps the concept has been simmering behind closed doors within Intel's terascale think tanks?


Not dead; coming soon to us from Intel. I know we talked about it. But talking is over, now it's wait and see.


http://www.xtremesystems.org/f...howthread.php?t=183122
 
Originally posted by: jones377
Originally posted by: Idontcare
Anyone remember Mitosis? (speculative multi-threading of single-threaded apps)

Anandtech, Fall IDF 2005 - Day 1, Turning Single into Multi-Threaded with Speculative Threading

I wonder if that is completely dead now, or perhaps the concept has been simmering behind closed doors within Intel's terascale think tanks?

Auto-parallelisation has been an option in the Intel compiler for quite some time. There are SPEC submissions with it enabled. Sun and PGI also have something similar in their compilers.


Sun = SPARC = Elbrus = Intel.

 
The primary difference between RISC and CISC is that RISC instructions are all the same length.

In CISC you can have instructions with bits that basically indicate whether they "continue" on for more bytes, i.e. in CISC there are instructions that have a variable number of operands, etc. I don't really program asm, but I remember that in Intel x86 there are, say, 4-5 different add instructions, and some take 2 or 3 or 4 operands, so an instruction is presented to the CPU as x bits, or y bits, or z bits, and the CPU has to figure it out.


RISC is better because you know how long all instructions are. Pretty much all modern CPUs are just CISC instructions translated to a backend that takes micro-ops or whatever, which are RISC.

 
Originally posted by: dmens
The C2D backend is not RISC, not even close. It is less CISC-y than the frontend, but that's about it. I suspect the same applies for all x86 families with a distinguishable backend.

Honestly, the whole RISC versus CISC discussion seems like a pointless waste of time. There's no actual criterion for when something changes from RISC to CISC, and as time goes on RISC designs become more CISC-like anyway, as it can be done with little to no impact.


I always viewed the transition AMD made with K5 as one of legal (IP) necessity and not necessarily one of technical superiority or perceived superiority at the time.

Even if 'RISC' was better, using an x86 frontend on a RISC chip at the time had so much silicon overhead that it likely lost too much real estate to really best a traditional x86 chip of the day (though the K5 didn't do badly).
 