RISC vs. CISC: which is better?

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I read something yesterday about C2D. It said that C2D was in fact a RISC processor.

I can't find it, and before I could post it here I had another denial of service.

So I had to re-find the article, but I can't. Can anybody else read it?

There is a war going on about who controls this PC. I am losing, LOL! I could easily not be one of the RBN netbots, but too many people are involved now that want things as they are, so I suffer through it all.
 

PCTC2

Diamond Member
Feb 18, 2007
3,892
33
91
C2D is a CISC processor. Any x86 processor is a CISC processor.

RISC is PowerPC and similar. RISC = Reduced Instruction Set Computing. x86 and its SSE extensions are definitely NOT reduced. :p
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I know Intel's P4 was a CISC front end with RISC logic, just like AMD's, but this article said much more than that.
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
AFAIK all x86 cpus are RISC at the silicon level, running microcode that the CISC x86 instruction set translates to. They're RISC from the standpoint that the instructions are pretty simple, and a single x86 instruction translates to multiple native opcodes. They haven't executed the x86 instruction set 'natively' since the 286 or earlier.

In other words, the code your CPU executes in silicon for every clock tick looks absolutely nothing like what's being puked out of your assembler.

Which still doesn't make them 'true' RISC processors up until (arguably) the amd64 -- every RISC design worthy of the name was heavy on general purpose registers. The x86 instruction set has always been light on those, and even in hardware many registers had (have?) specialized uses.
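To make the translation idea above concrete, here's a toy Python sketch. It is purely illustrative: real micro-ops are nothing like these tuples, and the instruction format is made up. It just shows how one memory-operand x86 instruction (e.g. `add [mem], eax`) could break into a load/compute/store sequence of simpler register-to-register ops:

```python
# Toy illustration (not real microcode): split a hypothetical
# read-modify-write x86 instruction into simpler RISC-like micro-ops.
def decode(cisc_instr):
    """Return the micro-op sequence for a made-up (op, dest, src) tuple."""
    op, dest, src = cisc_instr          # e.g. ("add", "[mem]", "eax")
    if dest.startswith("["):            # memory destination -> load/op/store
        return [
            ("load", "tmp", dest),      # read memory into a temp register
            (op, "tmp", src),           # do the arithmetic on registers only
            ("store", dest, "tmp"),     # write the result back to memory
        ]
    return [cisc_instr]                 # register-only ops pass through as-is

micro_ops = decode(("add", "[mem]", "eax"))
print(len(micro_ops))  # 3 micro-ops for one x86 instruction
```

So one "complex" instruction fans out into several simple ones, which is the sense in which people call the backend RISC-like.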
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
It is true, ever since the advent of the x86 decoder stage and micro-ops (pentium pro days) the desktop x86 processor has really been a RISC processor with an x86 decoder on the front-end. So in goes CISC x86 instructions, they get micro-op'ed into RISC instructions, processed, and out comes your results.

AMD called their K6 micro-architecture RISC86.

Here's Anand's blurb on micro-ops http://www.anandtech.com/cpuch...howdoc.aspx?i=3276&p=9
 

Foxery

Golden Member
Jan 24, 2008
1,709
0
0
Welcome to, um, 1998 :)

Originally posted by: v8envy
AFAIK all x86 cpus are RISC at the silicon level, running microcode that the CISC x86 instruction set translates to. They're RISC from the standpoint that the instructions are pretty simple, and a single x86 instruction translates to multiple native opcodes. They haven't executed the x86 instruction set 'natively' since the 286 or earlier.

I don't think it happened until the Pentium Pro/ Pentium II family, but otherwise yes.

The original x86 architecture was CISC. Apple made a big fuss over switching to a 100% RISC processor in the late 90s, but since it required a complete replacement of all software and OSes, Intel/Microsoft came up with a solution that remained compatible while gaining a nice speed boost.
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
The C2D backend is not RISC, not even close. It is less CISC-y than the frontend, but that's about it. I suspect the same applies for all x86 families with a distinguishable backend.
 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
Originally posted by: dmens
The C2D backend is not RISC, not even close. It is less CISC-y than the frontend, but that's about it. I suspect the same applies for all x86 families with a distinguishable backend.

You have proof of this?

As everyone else on this thread has said, the back ends of most of today's x86 CPUs are much more RISC than CISC. The CPU decodes each x86 instruction into micro-ops, and these micro-ops are basically RISC instructions.

from "real world technologies" on the barcelona
http://www.realworldtech.com/p...ID=RWT051607033728&p=4
Like the Pentium Pro, the K7/8 has an internal instruction set which is fairly RISC-like, composed of micro-ops. Each micro-op is fairly complex, and can include one load, a computation and a store. Any instruction which decodes into 3 or more micro-ops (called a VectorPath instruction) is sent from the pick buffer to the microcode engine. For example, any string manipulation instruction is likely to be micro-coded. The microcode unit can emit 3 micro-ops a cycle until it has fully decoded the x86 instruction. While the microcode engine is decoding, the regular decoders will idle; the two cannot operate simultaneously. The vast majority of x86 instructions decode into 1-2 micro-ops and are referred to as DirectPath instructions (singles or doubles).


Here is a huge article on the Core 2 Duo micro-architecture. It talks a lot about these micro-ops, and it is easy to put two and two together and see that these micro-ops are basically RISC-like.

http://arstechnica.com/articles/paedia/cpu/core.ars/1
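The DirectPath/VectorPath rule in the RWT quote above boils down to a simple threshold, sketched here in Python. The micro-op counts in the table are invented for illustration; the real counts are in AMD's optimization guides:

```python
# Sketch of the decode-path rule from the RWT quote: instructions that
# produce 1-2 micro-ops are "DirectPath" (handled by the regular decoders);
# 3 or more go to the microcode engine as "VectorPath".
MICRO_OP_COUNT = {"add": 1, "push": 2, "movs": 8}  # hypothetical counts

def decode_path(x86_instr):
    n = MICRO_OP_COUNT.get(x86_instr, 1)  # assume 1 if unknown
    return "DirectPath" if n <= 2 else "VectorPath"

print(decode_path("add"))   # DirectPath
print(decode_path("movs"))  # VectorPath (string op -> microcoded)
```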
 

phaxmohdem

Golden Member
Aug 18, 2004
1,839
0
0
www.avxmedia.com
"P6 chip, triple the speed of the Pentium... Yeah but its not just the chip, its got a PCI bus, but you knew that... Indeed. RISC architecture is going to change everything... Yeah RISC is good."

Is it bad I know that from memory?
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Originally posted by: Lord Banshee
You have proof of this?

As everyone else on this thread has said, the back ends of most of today's x86 CPUs are much more RISC than CISC. The CPU decodes each x86 instruction into micro-ops, and these micro-ops are basically RISC instructions.

basically RISC? there are even more types of micro-op instructions than macro-op instructions in C2D backend. the reason for the translation is so that x86 code is turned into something with more regularity and organization. that way a single chunk of logic can process a micro-op family, as opposed to a single macro-op.

so imo the instruction set is actually expanded after translation, so not "reduced instruction set computing", but more organized and less complex (in some ways), hence less CISC-y.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: Idontcare
It is true, ever since the advent of the x86 decoder stage and micro-ops (pentium pro days) the desktop x86 processor has really been a RISC processor with an x86 decoder on the front-end. So in goes CISC x86 instructions, they get micro-op'ed into RISC instructions, processed, and out comes your results.

AMD called their K6 micro-architecture RISC86.

Here's Anand's blurb on micro-ops http://www.anandtech.com/cpuch...howdoc.aspx?i=3276&p=9

K5 was first. Take a look at chapters 1 and 4 of that link.

Originally posted by: Lord Banshee
Originally posted by: dmens
The C2D backend is not RISC, not even close. It is less CISC-y than the frontend, but that's about it. I suspect the same applies for all x86 families with a distinguishable backend.

You have proof of this?

As everyone else on this thread has said, the back ends of most of today's x86 CPUs are much more RISC than CISC. The CPU decodes each x86 instruction into micro-ops, and these micro-ops are basically RISC instructions.

from "real world technologies" on the barcelona
http://www.realworldtech.com/p...ID=RWT051607033728&p=4
Like the Pentium Pro, the K7/8 has an internal instruction set which is fairly RISC-like, composed of micro-ops. Each micro-op is fairly complex, and can include one load, a computation and a store. Any instruction which decodes into 3 or more micro-ops (called a VectorPath instruction) is sent from the pick buffer to the microcode engine. For example, any string manipulation instruction is likely to be micro-coded. The microcode unit can emit 3 micro-ops a cycle until it has fully decoded the x86 instruction. While the microcode engine is decoding, the regular decoders will idle; the two cannot operate simultaneously. The vast majority of x86 instructions decode into 1-2 micro-ops and are referred to as DirectPath instructions (singles or doubles).


Here is a huge article on the Core 2 Duo micro-architecture. It talks a lot about these micro-ops, and it is easy to put two and two together and see that these micro-ops are basically RISC-like.

http://arstechnica.com/articles/paedia/cpu/core.ars/1

Those articles are simplified for the layperson. dmens is right. For what it's worth though, having a micro-op that does a load, operation, and store is very much not RISC (RISC would have 3 ops for that sequence, not 1). The PPro backend might have been more RISCy, but modern Intel architectures (at least since Intel started fusing micro-ops to get some of the benefits of AMD's method) aren't really RISC.
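The load-op-store point can be shown side by side. This is a toy sketch with made-up register names, not real encodings; it only contrasts the instruction counts:

```python
# Toy contrast: a fused AMD-style macro-op (one op that loads, computes,
# and stores) versus the equivalent sequence of three pure-RISC ops.
fused = [("load_add_store", "[mem]", "eax")]  # 1 fused op
risc = [
    ("load", "r1", "[mem]"),   # in RISC, loads/stores are separate ops
    ("add", "r1", "r2"),       # ALU ops work only on registers
    ("store", "[mem]", "r1"),
]
print(len(fused), len(risc))   # 1 vs 3 ops for the same work
```

That 1-vs-3 gap is exactly why a backend built around fused ops isn't really RISC.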
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Originally posted by: CTho9305
Those articles are simplified for the layperson. dmens is right. For what it's worth though, having a micro-op that does a load, operation, and store is very much not RISC (RISC would have 3 ops for that sequence, not 1). The PPro backend might have been more RISCy, but modern Intel architectures (at least since Intel started fusing micro-ops to get some of the benefits of AMD's method) aren't really RISC.

I think even P6 was following the same translation philosophy, not because they wanted to do a RISC backend, but simply to organize x86 into something that made more damn sense.

curious about the K families. do you have any insights into the matter?
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: dmens
Originally posted by: CTho9305
Those articles are simplified for the layperson. dmens is right. For what it's worth though, having a micro-op that does a load, operation, and store is very much not RISC (RISC would have 3 ops for that sequence, not 1). The PPro backend might have been more RISCy, but modern Intel architectures (at least since Intel started fusing micro-ops to get some of the benefits of AMD's method) aren't really RISC.

I think even P6 was following the same translation philosophy, not because they wanted to do a RISC backend, but simply to organize x86 into something that made more damn sense.

curious about the K families. do you have any insights into the matter?

I don't know the original motivations, but I would assume it was pretty much the same. I have heard from outside sources that K5 looked like an x86 frontend slapped on to an R29k RISC chip, but outside sources tend not to know what's really happening under the hood.

One piece of evidence that the goal isn't a RISC backend is the fact that with each generation, more and more operations are going from multiple micro-ops to single (or at least fewer) micro-ops (you can find lists in the optimization guides for K7 through family 10h). You spend extra transistors to gain performance - a good tradeoff as transistors become cheaper.
 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
Originally posted by: dmens
Originally posted by: Lord Banshee
You have proof of this?

As with everyone else on this thread, most back-ends of today's x86 CPUs are very much RISC than CISC. The cpu decodes the x86 instruction into micro-ops, these micro-ops are basically RISC instructions.

basically RISC? there are even more types of micro-op instructions than macro-op instructions in C2D backend. the reason for the translation is so that x86 code is turned into something with more regularity and organization. that way a single chunk of logic can process a micro-op family, as opposed to a single macro-op.

so imo the instruction set is actually expanded after translation, so not "reduced instruction set computing", but more organized and less complex (in some ways), hence less CISC-y.

That makes quite a bit more sense as to why you call it CISC-y.

I guess I was just thinking of the micro-ops as being much simpler instructions, unlike the crazy x86 instructions, but that isn't what makes an ISA RISC. It is the "Reduced" part, and with as many micro-ops as there are, I guess you are right.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: CTho9305
Originally posted by: dmens
Originally posted by: CTho9305
Those articles are simplified for the layperson. dmens is right. For what it's worth though, having a micro-op that does a load, operation, and store is very much not RISC (RISC would have 3 ops for that sequence, not 1). The PPro backend might have been more RISCy, but modern Intel architectures (at least since Intel started fusing micro-ops to get some of the benefits of AMD's method) aren't really RISC.

I think even P6 was following the same translation philosophy, not because they wanted to do a RISC backend, but simply to organize x86 into something that made more damn sense.

curious about the K families. do you have any insights into the matter?

I don't know the original motivations, but I would assume it was pretty much the same. I have heard from outside sources that K5 looked like an x86 frontend slapped on to an R29k RISC chip, but outside sources tend not to know what's really happening under the hood.

One piece of evidence that the goal isn't a RISC backend is the fact that with each generation, more and more operations are going from multiple micro-ops to single (or at least fewer) micro-ops (you can find lists in the optimization guides for K7 through family 10h). You spend extra transistors to gain performance - a good tradeoff as transistors become cheaper.

Don't overlook the 50k-ft marketing and legal issue at that time which was that AMD could 100% use Intel's 486 chip design verbatim for their own sales efforts but they were only allowed to make x86 compatible chips thereafter by court order. No pentium clones in the fashion that everyone under the sun was churning out 486 clones.

So creating a "pentium-class" competitive chip required doing something drastically non-CISC in terms of what AMD knew about CISC from 486 and prior designs.

I always viewed the transition AMD made with K5 as one of legal (IP) necessity and not necessarily one of technical superiority or perceived superiority at the time.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Nemesis 1
I read something yesterday about C2D. It said that C2D was in fact a RISC processor.

I can't find it, and before I could post it here I had another denial of service.

So I had to re-find the article, but I can't. Can anybody else read it?

There is a war going on about who controls this PC. I am losing, LOL! I could easily not be one of the RBN netbots, but too many people are involved now that want things as they are, so I suffer through it all.

Nemesis I very much suspect it was this article on Nehalem (contrasted to Core 2 and Barcelona) on Real World Tech: http://www.realworldtech.com/p...ID=RWT040208182719&p=5
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
No, that's not it; I read that earlier. But I'm having a hard time remembering what I read.

My mind hasn't been working very well yesterday and today. I feel pretty good, I just can't think well. But thanks for the link.

After reading the replies, I can see I clearly misread that article.

But I would still like to know which is better, RISC or CISC, and why. Also, are the registers on the front end or the back end?
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
The RISC vs. CISC argument has been dead for about a decade, if not more.

Also, if a design has a defined frontend/backend, the registers are in the backend, never in the front. So I'm not sure what your question is.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Idontcare
Originally posted by: CTho9305
Originally posted by: dmens
Originally posted by: CTho9305
Those articles are simplified for the layperson. dmens is right. For what it's worth though, having a micro-op that does a load, operation, and store is very much not RISC (RISC would have 3 ops for that sequence, not 1). The PPro backend might have been more RISCy, but modern Intel architectures (at least since Intel started fusing micro-ops to get some of the benefits of AMD's method) aren't really RISC.

I think even P6 was following the same translation philosophy, not because they wanted to do a RISC backend, but simply to organize x86 into something that made more damn sense.

curious about the K families. do you have any insights into the matter?

I don't know the original motivations, but I would assume it was pretty much the same. I have heard from outside sources that K5 looked like an x86 frontend slapped on to an R29k RISC chip, but outside sources tend not to know what's really happening under the hood.

One piece of evidence that the goal isn't a RISC backend is the fact that with each generation, more and more operations are going from multiple micro-ops to single (or at least fewer) micro-ops (you can find lists in the optimization guides for K7 through family 10h). You spend extra transistors to gain performance - a good tradeoff as transistors become cheaper.

Don't overlook the 50k-ft marketing and legal issue at that time which was that AMD could 100% use Intel's 486 chip design verbatim for their own sales efforts but they were only allowed to make x86 compatible chips thereafter by court order. No pentium clones in the fashion that everyone under the sun was churning out 486 clones.

So creating a "pentium-class" competitive chip required doing something drastically non-CISC in terms of what AMD knew about CISC from 486 and prior designs.

I always viewed the transition AMD made with K5 as one of legal (IP) necessity and not necessarily one of technical superiority or perceived superiority at the time.


K5 -- isn't that the CPU design that AMD got when they bought another company? Four-issue, wasn't it?