Originally posted by: josh609
I was playing my N64 last night... when it hit me. Why was the Nintendo 64 64-bit? Why have we seemed to go back to 32-bit on consoles? Also, why were the PS2 and the Dreamcast 128-bit processors? Please explain...
Indeed. As one of those responsible for fraying this thread, I'll try.
The Jaguar's claim to 64-bitness relates directly to the perception that wider data chunks mean greater performance. It had a 64-bit bus and two 64-bit specialized devices - a blitter and an "object processor" - that were involved in manipulating and moving pixel data.
So in terms of width = performance, the Jaguar's claim is perfectly valid. It was the first 64-bit console.
Things get complicated when you want to compare these bits to the bitness of anything else - modern consoles or computers.
For one thing, the Pentium introduced a 64-bit bus, and MMX serves the same purpose as a blitter. It also operates on 64-bit wide segments. Yet the Pentium is still perceived as 32-bit. The easy route out of this mess is to say that computers and general purpose CPUs are a different thing.
Computers:
While there is generally a lot of agreement on which -bit classification any particular processor belongs to, there is no agreement on any definition. The one Peter uses for the P4 is very popular. The reason for that is that it happens to fit (so far, and with some contortions - "general purpose registers") computer and processor generations since the dawn of computing.
It is a potential sea of troubles though. One thing is that while this particular -bit property was relevant early on, that relevance is today mostly coincidental. So it has become less useful, even somewhat misleading, as a starting point for understanding the capabilities of whatever-bit.
Intuitively, one wants to understand n-bit width as a performance related property.
And it most certainly is - if, and only IF, it means that the processor can do something in one single crunch that would otherwise require multiple operations.
There is no point in having logical operations wider than what some basic data type actually requires.
However, there is the opportunity to simultaneously handle several elements of a basic data type, side by side in a wider segment.
This is what modern CPUs do with their SIMD (single instruction, multiple data) registers and instructions. It also goes by the name vector processing.
Multiple pieces of sequential data, side by side, are also how the wider buses are used. It is also basically what consoles and video cards do with their 64/128/256-bit architectures.
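To make that concrete, here is a minimal C sketch using x86 SSE2 intrinsics - my own choice of example, not anything any of these consoles actually run. One 128-bit register holds four 32-bit integers side by side, and one instruction adds all four pairs at once:

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stdint.h>

    /* Add two arrays of 32-bit integers, four elements per instruction.
       For brevity, 'n' is assumed to be a multiple of 4. */
    void add_arrays(const int32_t *a, const int32_t *b, int32_t *out, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
            __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
            _mm_storeu_si128((__m128i *)(out + i), _mm_add_epi32(va, vb));
        }
    }

The register is "128-bit", but no single value being computed is wider than 32 bits - it's just four of them at a time.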
No console has (yet) any reason to handle any basic single data type of greater width than 32 bits.
(The Jaguar didn't either. My wild guess at 64-bit fixed point arithmetic was wrong. The Jaguar is not capable of doing that fast, as it does not have any general 64-bit ALU. Also, I gather, game programming in those early days largely made do with shorter fixed point math.)
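For illustration, here is roughly what 64-bit addition looks like when the hardware only has a 32-bit ALU - a hypothetical C sketch of the principle, not anything from the Jaguar itself:

    #include <stdint.h>

    /* 64-bit addition built from 32-bit halves: add the low words,
       detect the carry, then add the high words plus the carry.
       A genuine 64-bit ALU does all of this in a single operation. */
    void add64(uint32_t a_hi, uint32_t a_lo,
               uint32_t b_hi, uint32_t b_lo,
               uint32_t *r_hi, uint32_t *r_lo)
    {
        uint32_t lo = a_lo + b_lo;
        uint32_t carry = (lo < a_lo) ? 1 : 0;  /* wrapped around -> carry out */
        *r_lo = lo;
        *r_hi = a_hi + b_hi + carry;
    }

That is the "one crunch versus multiple operations" distinction in practice.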
For a computer/general CPU, there are also very few opportunities to make good use of any ability to compute integers of greater width than 32 bits. Looking at width, we see that the CPU has indeed grown in width for performance reasons, for those particular things that have been useful: Floating point of higher precision (64-bit), buses and datapaths, vector processing.
Suppose we had a compelling need for integers longer than 32 bits purely for the purpose of manipulating some data. Then it's absolutely assured that we'd have seen an extension to wider integers long ago, just as we've seen wider FPU and SIMD extensions.
We would then have the benefit of higher performance when computing wide integers.
But if everything else had stayed the same, would we then have anything different from a "32-bit processor"? Here comes the great clash and entanglement! But fortunately, we don't have that situation, so I will elect to not go down that route, in the hope that it will save me a lot of tiresome discussion.
Instead, focus on what, for instance, AMD's x86-64 brings along under the "64-bit" banner. AMD's sole purpose with x86-64 was to introduce a new operating mode. When Windows XP x64 Edition or 64-bit Vista runs on our computers, the CPU will be permanently set to 'long mode' and will map virtual 64-bit addresses to hardware. That's really it! That's the purpose behind 64-bit, and that's the one thing that will enable a whole new magnitude of capabilities for the PC. Period!
...But coincidental to that, now, and only now, comes the requirement to deal with 64-bit integers - more specifically, pointers. So for that very purpose, the width of the registers and instructions handling integers has grown to 64 bits.
While 64-bit integer registers and operations are going to be beneficial for other purposes - increasingly so with the larger data models of future computing - the sole real reason for their inclusion in x86-64 is address arithmetic. Period.
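A small illustration of why the pointer part matters - the 6 GiB figure is made up for the example, but the point stands: such an offset is trivially representable as a 64-bit integer, yet can never fit in a 32-bit pointer, so reaching it directly takes long mode's 64-bit addressing.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* An offset well past the 4 GiB mark: fine as a 64-bit value,
           but beyond anything a 32-bit pointer can address. */
        uint64_t offset = 6ULL * 1024 * 1024 * 1024;  /* 6 GiB */

        /* On a 64-bit build this typically prints 8; on a 32-bit build, 4. */
        printf("pointer size: %zu bytes\n", sizeof(void *));
        printf("offset: %llu bytes\n", (unsigned long long)offset);
        return 0;
    }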
Game consoles:
While we see that the capability to handle wide segments of multiple sequential pieces of data does not affect the n-bit perception or label of computers and general purpose CPUs, it most certainly does when it comes to specialized processors, graphics cards, and early consoles.
Here the n-bit property is used for a much more direct and easily understood paradigm. Simply the bit-width at which things are done. In registers and on buses.
In this sense the Jaguar's claim to 64-bit is perfectly valid. And while I don't know much about game consoles, I would hazard a guess that so is the N64's, and so are the PS2's and Dreamcast's claims to 128-bit.
"why have we seemed to go back to 32bit on consoles" ?
Well, I would guess that has much to do with marketing climate.
Because of the state of the economy and technology, early gaming devices were made with 16-bit and 32-bit components. And I think a big deal was made of 32-bit by marketing at the time, so it's sort of a logical next step to boast about 64 bits.
No console has (yet) any reason to compute wider pieces of singular data than 32 bits. And they don't. And this bit-counting marketing becomes increasingly vulnerable to flak. I've checked up on the Jaguar, and its 64-bitness was the subject of much contention and debate.
The thing is that the only work that gets done in 64-bit chunks on the Jaguar is pixel manipulation. For the rest, it's 32-bit.
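To show what "pixel manipulation in 64-bit chunks" buys you, here is a toy C sketch of the principle - not the Jaguar's actual blitter, just the idea that a 64-bit data path moves eight 8-bit pixels per operation instead of one:

    #include <stdint.h>
    #include <string.h>

    /* Copy a row of 8-bit pixels in 64-bit chunks. 'count' is assumed
       to be a multiple of 8. With optimization, compilers typically turn
       each 8-byte memcpy into a single 64-bit load and store. */
    void copy_row(uint8_t *dst, const uint8_t *src, size_t count)
    {
        for (size_t i = 0; i < count; i += 8) {
            uint64_t chunk;
            memcpy(&chunk, src + i, 8);  /* read eight pixels at once  */
            memcpy(dst + i, &chunk, 8);  /* write eight pixels at once */
        }
    }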
To compare with modern consoles, you would have to compare it to the graphics chips. I believe they are very wide indeed. 256-bit? Maybe?
But all that has become sort of a separate entity on later 3D-oriented consoles. Focus has shifted from bus width, sprites, blobs and other 2D pixel stuff towards the processing power available to run a 3D engine.
That processing power is now floating point. The performance "width" of computing is now served by parallel computing. Both by vector units, and lately by multiple cores.
And each of these lanes is 32-bit, because they don't need to be wider. There is better use for more of them.
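As a concrete example of those 32-bit lanes, here is a hedged C sketch using x86 SSE intrinsics - again my own stand-in, since the consoles use their own vector units. A single "128-bit" vector multiply is really four independent 32-bit float multiplies running side by side:

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* Scale an array of floats, four 32-bit lanes per instruction.
       'n' is assumed to be a multiple of 4. */
    void scale(const float *in, float factor, float *out, int n)
    {
        __m128 f = _mm_set1_ps(factor);  /* the factor in all four lanes */
        for (int i = 0; i < n; i += 4)
            _mm_storeu_ps(out + i, _mm_mul_ps(_mm_loadu_ps(in + i), f));
    }

No single lane needs to be wider than 32 bits; performance comes from having more of them.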
So my answer would be that we seem to be back to 32 bits because marketing is focusing on a different component. I'm fairly certain that in the specific regards in which the N64 and Jaguar were 64-bit, modern consoles are just as wide or wider.