Why is 16-bit w/ 16-bit proc better than 32-bit?

name9902

Member
Feb 12, 2005
Why is it that a 16-bit proc works better with 16-bit programs than a 32-bit processor does, even though the 32-bit processor would seem to me to be faster and able to handle just as many commands as a 16-bit proc?

In reference to the Pentium Pro and Pentium: why did the Pentium Pro run DOS and the versions of Windows out at the time slower and less smoothly than the Pentium? They were 16-bit applications, and the Pentium Pro was a 32-bit processor.
 

Bobthelost

Diamond Member
Dec 1, 2005
Think of them as different types of engines for cars. An F1 engine can hit massive speeds, but if you wanted to use it in a bus you'd be screwed. Different CPU designs are better at different jobs/tasks, and some programs are optimised so they run best on one particular CPU design. That's just how it works.
 

Fox5

Diamond Member
Jan 31, 2005
1. Maybe the Pentium Pro was just slower for the workloads being done at the time?
2. The Pentium Pro was a native 32-bit chip, so there was probably some kind of conversion process involved to make 16-bit code run on it. In addition, the 16-bit code probably didn't take advantage of the extra hardware features the PPro added.
 

imported_Starglider

Junior Member
May 18, 2005
The P6 design has always run 16-bit x86 code natively. In fact, all x86 processors have had hardware-level backward compatibility without any sort of translation required. AFAIK the three main reasons the original P6 didn't work well on 16-bit code were: (1) the address generation and load units were optimised for 32-bit protected mode and were not as efficient at accessing memory in the legacy real and 16-bit protected modes; (2) the P6's internal RISC instruction set was optimised for 32-bit operands and sometimes took more cycles to complete for 16-bit operands than 16-bit-optimised operations on earlier processors; and (3) the P6 used microcode to implement some legacy x86 instructions that earlier processors implemented directly (microcode is generally much slower than dedicated logic).
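
To make point (2) a bit more concrete, here's a tiny C micro-benchmark of my own (an illustration, not something from the posts above) that accumulates a sum in a uint16_t and then in a uint32_t. Built for 32-bit x86 with optimisation off, the 16-bit version typically involves 0x66 operand-size prefixes and partial-register writes, which is exactly the kind of code the P6 handled less gracefully than its predecessors; on a modern CPU, or with an optimising compiler, the gap may shrink to nothing.

/*
 * Hypothetical micro-benchmark: 16-bit vs 32-bit accumulation.
 * Compile without optimisation (e.g. -O0) so the compiler doesn't
 * rewrite or remove the loops; timings are only indicative.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000u

static uint16_t sum16(void)
{
    uint16_t s = 0;
    for (uint32_t i = 0; i < ITERS; i++)
        /* 16-bit accumulator: unoptimised x86 code generally moves it
         * through 16-bit (0x66-prefixed) register/memory operations */
        s = (uint16_t)(s + (uint16_t)i);
    return s;
}

static uint32_t sum32(void)
{
    uint32_t s = 0;
    for (uint32_t i = 0; i < ITERS; i++)
        s += i;  /* plain 32-bit add, the case the P6 was optimised for */
    return s;
}

int main(void)
{
    clock_t t0 = clock();
    volatile uint16_t r16 = sum16();
    clock_t t1 = clock();
    volatile uint32_t r32 = sum32();
    clock_t t2 = clock();

    printf("16-bit loop: %.3fs (result %u)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, (unsigned)r16);
    printf("32-bit loop: %.3fs (result %u)\n",
           (double)(t2 - t1) / CLOCKS_PER_SEC, (unsigned)r32);
    return 0;
}

The same idea scaled up is roughly what running DOS or Windows 3.x/95's 16-bit portions on a Pentium Pro looked like: correct results, just executed on a core tuned for a different operand size.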