Why 64-bit consoles?

Aug 23, 2005
Originally posted by: Vee
Originally posted by: xtknight

What is the actual meaning of -bit then? The Athlon 64 is not 64-bit?

How many -bit is the GeForce 6800?

The meaning of "-bit" depends on what you happen to be referring to at the time.

When you speak about the Athlon 64 as a 64-bit CPU, that has a specific meaning, one closely related to what "32-bit" and "16-bit" have meant across generations of processors, OSes, and software on the PC, and to the general concept of 16-, 32-, and 64-bit class computing.

My point is that you have to be specific. You cannot interpret every "-bit" claim, like Nintendo's or the PS2's, in the same context.

Is it the hardware processing width? Then the A64 is a 752-bit CPU.
Is it the logical width, the data bits that can be computed and committed every cycle? Then the A64 is a 192-bit CPU.
Is it the width of data registers? Then the A64 is a 128-bit CPU.
Is it the width of data that an instruction can work on? Then the A64 is a 128-bit CPU.
Is it the width of buses? Then the A64 is a 128-bit CPU.

The Athlon 64 is a 64-bit CPU because it has an operating mode whose instructions reserve 64 bits for addressing any piece of data. It has reserved 64 bits for mapping its virtual space. That is 64-bit computing!
For the very purpose of handling this 64-bit pointer arithmetic, it also has 64-bit integer data registers and instructions.
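
To make that concrete, here is a minimal C sketch of my own (assuming an x86-64 target; an illustration, not anything from the original posts): in 64-bit mode every pointer occupies 64 bits, and pointer arithmetic runs in 64-bit integer registers.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int value = 42;
    int *ptr = &value;

    /* On x86-64, a pointer is 8 bytes (64 bits): the ISA reserves 64 bits
       for every virtual address, even though current implementations only
       translate 48 of them. */
    printf("sizeof(void *) = %zu bytes\n", sizeof(void *));
    printf("address of value = %p\n", (void *)ptr);

    /* Pointer arithmetic happens in 64-bit integer registers, which is
       why the ISA also gained 64-bit general-purpose registers. */
    uintptr_t addr = (uintptr_t)ptr;
    printf("as a 64-bit integer: 0x%llx\n", (unsigned long long)addr);
    return 0;
}

Built as a 64-bit binary (e.g. gcc -m64), this prints 8 for the pointer size; the same code built with -m32 prints 4, which is the whole 32-bit versus 64-bit distinction in one line.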

As for your PS2 "128-bit CPU": it has a logically 128-bit SIMD unit, which is served in hardware by a number of narrower units, the widest being the 64-bit integer units.

What are commonly understood as 32-bit CPUs, the previous generations of Pentiums and Athlons, have long had similar SIMD registers and instructions, with widths up to 128 bits.
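
For example, here is a minimal C sketch using SSE intrinsics (my own illustration; compile with something like gcc -msse): even a 32-bit Pentium III or Athlon XP can execute one instruction across a 128-bit XMM register, yet nobody calls those 128-bit CPUs.

#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics; SSE shipped on 32-bit CPUs */

int main(void) {
    /* One 128-bit XMM register holds four packed 32-bit floats. */
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);

    /* A single ADDPS instruction adds all four lanes at once, even on
       a "32-bit" CPU. Register width is not what "-bit" class means. */
    __m128 sum = _mm_add_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}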

The 486DX had 64-bit registers and operations.
The original Pentium had a 64-bit data bus. And so on.


Edit: And Mark R, no, the 68000 was a 32-bit CPU: even though it only had a 16-bit data bus, it still reserved 32 bits for mapping its address space, and it had 32-bit registers for pointer arithmetic. This is quite a point!
It didn't use more than 24 of the 32 address bits, though. Apple, amongst others, unfortunately exploited that by stashing data in the unused upper byte, which broke software under the full 32-bit addressing of the later 68020, 68030, 68040, and 68060.

The A64 similarly uses only 48 bits of its virtual space. To stop this kind of destructive "creative" behavior from programmers, AMD's x86-64 requires all 64 bits of the address to be accounted for: addresses must be in 64-bit 'canonical' form, which prevents the upper bits from being used for any other purpose.
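
As a small sketch of what 'canonical form' means in practice (my own illustration, not from the post): bits 63 through 48 must all be copies of bit 47, so the address is just the sign extension of its low 48 bits.

#include <stdio.h>
#include <stdint.h>

/* An x86-64 virtual address is canonical when bits 63..48 are copies of
   bit 47 (sign extension of the 48-bit virtual space). Dereferencing a
   non-canonical address faults, which stops programs from stashing tags
   in the upper bits the way 24-bit-era 68000 software did. */
static int is_canonical(uint64_t addr) {
    uint64_t top = addr >> 47;         /* bit 47 and everything above it */
    return top == 0 || top == 0x1FFFF; /* all zeros or all ones */
}

int main(void) {
    printf("%d\n", is_canonical(0x00007FFFFFFFFFFFULL)); /* 1: top of the lower half */
    printf("%d\n", is_canonical(0xFFFF800000000000ULL)); /* 1: bottom of the upper half */
    printf("%d\n", is_canonical(0x0000800000000000ULL)); /* 0: in the non-canonical hole */
    return 0;
}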

Best explanation I've seen yet.
 

Googer

Lifer
Nov 11, 2004
Originally posted by: the splat in the hat
Originally posted by: Vee
[Vee's full explanation snipped; it is quoted in its entirety above]

Best explanation I've seen yet.

Great explanation; what is the source?
 

zephyrprime

Diamond Member
Feb 18, 2001
Originally posted by: Fox5
Xbox (and pretty much all PC CPUs since the Pentium MMX) could do 64-bit calculations. MMX allowed for up to 64-bit integer calculations, 3DNow! allowed for 64-bit floating point, and SSE allowed for 128-bit floating point... and I think maybe 128-bit integer as well. I think x86 CPUs only achieved 64-bit vector computations with x86-64 though, whereas Apple's G4 and G5 CPUs could do 128-bit vector computations through AltiVec.

BTW, the Jaguar contained two 32-bit processors.
MMX could do 64-bit addition, but that was about it; it couldn't do 64-bit multiply. Primarily, MMX was used for packed integer arithmetic (i.e., vector math). 3DNow! allowed two 32-bit floating-point operations in a single 64-bit register, but it is misleading to say it could do 64-bit floating point. Even the earliest x87 FPU could handle 64-bit and 80-bit floating point; the regular x87 has 80-bit registers. SSE allowed four 32-bit floating-point operations at a time, and SSE2 added two 64-bit floating-point operations in one instruction. Neither operated on 128-bit floating-point data. Strictly speaking, AltiVec didn't either, but its hardware could execute a full 128-bit vector operation at once.
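
To make the lane-versus-width distinction concrete, here is a minimal C sketch with SSE2 intrinsics (my own illustration, not from the post): the 128-bit XMM register holds two independent 64-bit doubles; there is no single 128-bit floating-point value involved.

#include <stdio.h>
#include <emmintrin.h>  /* SSE2 intrinsics */

int main(void) {
    /* One 128-bit XMM register viewed as two packed 64-bit doubles.
       _mm_set_pd lists the high lane first. */
    __m128d a = _mm_set_pd(2.5, 1.5);
    __m128d b = _mm_set_pd(0.5, 0.25);

    /* ADDPD performs two independent 64-bit double additions in one
       instruction. The register is 128 bits wide, but no 128-bit
       floating-point number exists here. */
    __m128d sum = _mm_add_pd(a, b);

    double out[2];
    _mm_storeu_pd(out, sum);
    printf("%.2f %.2f\n", out[0], out[1]); /* 1.75 3.00 */
    return 0;
}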