Why 64bit consoles?

josh609

Member
Aug 8, 2005
194
0
0
I was playing my N64 last night.......when it hit me. Why was the Nintendo 64 64bit? Why have we seemed to go back to 32bit on consoles? Also, why were the PS2 and the dreamcast 128bit processors? Please explain......
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Why? Because if you need to move more stuff, you can either make it move faster, or on more transport lanes. Graphics, particularly 3D, is all about moving data and computing stuff quickly. So when you've hit the physical limit of making your design faster, make it wider.

Besides, the first console to contain 64-bit processors was Atari's Jaguar, in 1993.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: josh609
I was playing my N64 last night.......when it hit me. Why was the Nintendo 64 64bit? Why have we seemed to go back to 32bit on consoles? Also, why were the PS2 and the dreamcast 128bit processors? Please explain......

The Nintendo 64 wasn't 64bit. Nor the Jaguar. Bits, and particularly the bits in 16/32/64 -bit computing are much abused and misunderstood. For console marketing it's enough that anything inside is 64 bit wide, and then it's "64-bit".

Nor is there any 128 bit processor. It's all intentional misunderstandings.

 

xtknight

Elite Member
Oct 15, 2004
12,974
0
71
Originally posted by: Vee
Originally posted by: josh609
I was playing my N64 last night.......when it hit me. Why was the Nintendo 64 64bit? Why have we seemed to go back to 32bit on consoles? Also, why were the PS2 and the dreamcast 128bit processors? Please explain......

The Nintendo 64 wasn't 64bit. Nor the Jaguar. Bits, and particularly the bits in 16/32/64 -bit computing are much abused and misunderstood. For console marketing it's enough that anything inside is 64 bit wide, and then it's "64-bit".

Nor is there any 128 bit processor. It's all intentional misunderstandings.

What is the actual meaning of -bit then? The Athlon 64 is not 64-bit?

How many -bit is the GeForce 6800?


http://www.gamecrazy.com/ps2/faq.aspx

First of Its Kind

The 128-bit CPU is the first of its kind in the world, integrated with the state-of-the-art 0.15 micron process technology on a single LSI.

The new CPU incorporates two 64-bit integer units (IU) with a 128-bit SIMD multimedia command unit, three independent floating point vector calculation units (FPU, VU0, VU1), an MPEG 2 decoder circuit (Image Processing Unit/IPU), and high-performance DMA controllers onto one silicon chip.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
As I understand, the 'bitness' of a processor is a measure of how wide its general purpose registers and ALUs are - i.e. how much data can be processed in one operation.

An Athlon XP has 32 bit registers so is a 32 bit processor.
An Athlon 64, as well as having the 32 bit compatible registers, has a set of 64 bit registers and 64 bit ALUs to work on them. It's a 64 bit processor.

The N64 used a MIPS 4300 CPU. It had 32 64-bit general purpose registers and 64-bit ALUs. It's undoubtedly a 64-bit processor by any definition.

The Jaguar used a Motorola 68000 CPU (same as the Macintosh classic, Amiga and Atari ST). This is a 16 bit processor.
However, the Jaguar did have a 64 bit GPU (data was processed in 64 bit words).
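To put a number on what "processed in one operation" buys you, here's a small illustrative C sketch (mine, not from the thread): a 64-bit addition done the way a 32-bit ALU has to do it, in two halves with explicit carry handling. A 64-bit ALU does all of this in a single operation.

```c
#include <stdint.h>

/* Add two 64-bit values using only 32-bit halves, the way a 32-bit
 * ALU must: low halves first, then propagate the carry into the
 * high halves. */
void add64_via_32(uint32_t a_lo, uint32_t a_hi,
                  uint32_t b_lo, uint32_t b_hi,
                  uint32_t *r_lo, uint32_t *r_hi)
{
    *r_lo = a_lo + b_lo;
    uint32_t carry = (*r_lo < a_lo);  /* unsigned wraparound => carry out */
    *r_hi = a_hi + b_hi + carry;
}
```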
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: xtknight

What is the actual meaning of -bit then? The Athlon 64 is not 64-bit?

How many -bit is the GeForce 6800?

The meaning of "-bit" is whatever you happen to be referring to at the time.

When you speak about Athlon 64 as a 64-bit CPU, then that has a specific meaning. Which is highly related to the meaning of "32-bit" and "16-bit" in generations of processors, OS'es and software on the PC. And to the general concept of 16-, 32- and 64-bit class computing, in general.

My point is that you have to be specific. You cannot understand any "-bit" statement, like Nintendo's or PS2's in the same context.

Is it the hardware processing width? Then the A64 is a 752-bit CPU.
Is it the logical width, data bits that can be computed and committed every cycle? Then the A64 is a 192-bit CPU.
Is it the width of data registers? Then the A64 is a 128-bit CPU.
Is it the width of data that an instruction can work on? Then the A64 is a 128-bit CPU.
Is it the width of buses? Then the A64 is a 128-bit CPU.

The Athlon64 is a 64-bit CPU because it has an operating mode that uses instructions that reserves 64 bits for addressing any piece of data. It has reserved 64 bits for mapping its virtual space. - That is 64-bit computing!
For the very purpose of handling this 64-bit pointer arithmetic, it also has 64-bit integer data registers and instructions.
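A tiny illustrative C sketch (generic code, nothing AMD-specific) of why 64-bit addressing drags 64-bit integer registers along with it: pointer arithmetic is just integer arithmetic at the pointer's width, and at 32 bits it wraps past 4GB.

```c
#include <stdint.h>

/* Offsetting a base address by an element index is integer math at
 * the pointer's width. With 32-bit arithmetic it wraps modulo 2^32;
 * with 64-bit arithmetic there is room for the full address. */
uint32_t offset32(uint32_t base, uint32_t index, uint32_t elem_size)
{
    return base + index * elem_size;    /* wraps past 4GB */
}

uint64_t offset64(uint64_t base, uint64_t index, uint64_t elem_size)
{
    return base + index * elem_size;    /* no wrap until 2^64 */
}
```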

When it comes to your PS2 "128-bit CPU", it has a logical 128-bit SIMD unit. This is served in hardware by a number of more narrow units. The widest being the 64-bit integer units.

What is commonly understood as 32-bit CPUs, previous generations of Pentiums and Athlons, have for long had similar SIMD registers and instructions, with widths up to 128 bits.

The 486DX had 64-bit registers and operations.
The original Pentium had 64-bit data bus. etc.


Edit: And Mark R, - No the 68000 was a 32-bit CPU, because even if it only had a 16-bit data bus, it still reserved 32 bits for mapping its address space, and it had 32-bit registers for pointer arithmetic. This is quite a point!
It didn't use more than 24 of the 32 address bits though. This was unfortunately used by Apple, amongst others, to screw up software so it didn't work with later 32-bit addressing, with 68020, 68030, 68040, 68060.

The A64 in a similar manner only uses 48 bits for virtual space. In order to stop this type of destructive "creative" behavior on the part of programmers, the AMD86-64 requires all 64 bits of the address to be used. Addresses must be in 64-bit 'canonical' form, which prevents the upper bits from being used for any other purpose.
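As an illustrative sketch (my own, not AMD's pseudocode) of what 'canonical form' amounts to: with 48 implemented virtual bits, bits 63 down to 47 must all match, i.e. the address is the sign-extension of its low 48 bits.

```c
#include <stdint.h>
#include <stdbool.h>

/* A 64-bit address is canonical (48 implemented virtual bits) when
 * bits 63..47 are either all zero or all one, so the address equals
 * the sign-extension of its low 48 bits. */
bool is_canonical48(uint64_t addr)
{
    uint64_t upper = addr >> 47;            /* bits 63..47, 17 bits */
    return upper == 0 || upper == 0x1FFFF;  /* all-zero or all-one  */
}
```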

 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: josh609
I was playing my N64 last night.......when it hit me. Why was the Nintendo 64 64bit? Why have we seemed to go back to 32bit on consoles? Also, why were the PS2 and the dreamcast 128bit processors? Please explain......

N64 was capable of doing 64 bit calculations because it was derived from a high end workstation, and because its cpu still did a large part of the graphics work, for which 64 bit precision was useful. I don't think it had 64 bit memory addressability.

Dreamcast only had a 64 bit processor, but was hyped as a 128 bit system (I believe the graphics chip had a 128-bit memory bus or something like that) to compete with the PS2.

PS2 had little reason to be 128-bit, considering it had a dedicated graphics processor and that most consumer level 3d work can be handled with 64 bit.

Gamecube could do 64 bit calculations, with 32 bit memory addressability.

Xbox (and pretty much all PC cpus since Pentium MMX) could do 64 bit calculations. MMX allowed for up to 64 bit integer calculations, 3dnow allowed for 64 bit floating point, and SSE allowed for 128 bit floating point...and I think maybe 128 bit integer as well. I think x86 cpus only achieved 64 bit vector computations with x86-64 though, whereas Apple's G4 and G5 cpus could do 128 bit vector computations through Altivec.

BTW, the Jaguar contained two 32 bit processors.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Quoting from Atari history ...

The Jaguar has five processors which are contained in three chips. Two of
the chips are proprietary designs, nicknamed "Tom" and "Jerry". The third
chip is a standard Motorola 68000, and used as a coprocessor. Tom and
Jerry are built using an 0.5 micron silicon process. With proper
programming, all five processors can run in parallel.

- "Tom"
- 750,000 transistors, 208 pins
- Graphics Processing Unit (processor #1)
- 32-bit RISC architecture (32/64 processor)
- 64 registers of 32 bits wide
- Has access to all 64 bits of the system bus
- Can read 64 bits of data in one instruction
- Rated at 26.591 MIPS (million instructions per second)
- Runs at 26.591 MHz
- 4K bytes of zero wait-state internal SRAM
- Performs a wide range of high-speed graphic effects
- Programmable
- Object processor (processor #2)
- 64-bit RISC architecture
- 64-bit wide registers
- Programmable processor that can act as a variety of different video
architectures, such as a sprite engine, a pixel-mapped display, a
character-mapped system, and others.
- Blitter (processor #3)
- 64-bit RISC architecture
- 64-bit wide registers
- Performs high-speed logical operations
- Hardware support for Z-buffering and Gouraud shading
- DRAM memory controller
- 64 bits
- Accesses the DRAM directly

- "Jerry"
- 600,000 transistors, 144 pins
- Digital Signal Processor (processor #4)
- 32 bits (32-bit registers)
- Rated at 26.6 MIPS (million instructions per second)
- Runs at 26.6 MHz
- Same RISC core as the Graphics Processing Unit
- Not limited to sound generation
- 8K bytes of zero wait-state internal SRAM
- CD-quality sound (16-bit stereo)
- Number of sound channels limited by software
- Two DACs (stereo) convert digital data to analog sound signals
- Full stereo capabilities
- Wavetable synthesis, FM synthesis, FM Sample synthesis, and AM
synthesis
- A clock control block, incorporating timers, and a UART
- Joystick control

- Motorola 68000 (processor #5)
- Runs at 13.295MHz
- General purpose control processor

Communication is performed with a high speed 64-bit data bus, rated at
106.364 megabytes/second. The 68000 is only able to access 16 bits of
this bus at a time.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Thus, Jag's got 64-bit object processor and blitter, a 64-bit datapath, a partially-64-bit GPU, a 32-bit sound engine and a 32-bit general purpose processor.

I'd say this is about as 64-bit as it gets.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: Peter
Thus, Jag's got 64-bit object processor and blitter, a 64-bit datapath, a partially-64-bit GPU, a 32-bit sound engine and a 32-bit general purpose processor.

I'd say this is about as 64-bit as it gets.

Except in PCs, only the general purpose processor is counted as 64 bit. I'm sure the sound cards and video cards in PCs are well above 64 bit.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Originally posted by: Fox5
Except in PCs, only the general purpose processor is counted as 64 bit. I'm sure the sound cards and video cards in PCs are well above 64 bit.

That's because the PC is a general purpose machine. Consoles are different.

PC sound cards (the consumer kind) are only just now venturing into 24-bit processing; for graphics cards I can't comment - except that at the Jaguar's time, its technology was far enough beyond PC graphics to warrant developing PC graphics cards that used the Jaguar chipset. They didn't make it to market before Atari imploded, sadly.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: Peter
Thus, Jag's got 64-bit object processor and blitter, a 64-bit datapath, a partially-64-bit GPU, a 32-bit sound engine and a 32-bit general purpose processor.

I'd say this is about as 64-bit as it gets.

ehum,.. as you yourself stated, it has a 32-bit processor. And that 32-bit processor lacks protected mode addressing and lacks an FPU. And if you're right that it's a 68000 (and not a 68020 or 68030), then it has a 16-bit data bus and 16-bit hardware units. Anyway, the 68000 sort of corresponds to a '386SX, to put it in relation to the PC's technology. And it runs the show. The rest are co-processors, essentially video chips and sound chips.
And yes, it features a number of 64-bit wide elements.

One doesn't get anywhere on this subject, if the question of what one expects from the bits is never asked. Performance? Fine, but why will it get you performance?
A wider width will only get you better performance if, and only if, it can do something on that width of bits, that you previously needed more operations to do.

While I don't know anything about the Jaguar, I think I can assume that 64-bit operations mostly consisted of moving data. In that case, you can grab eight sequential bytes and treat them as if they were one piece of 64-bit data.
And secondly, for doing fixed point math, - a fast and simple way to get some measure of fractional math from an integer unit, in the days before the FPU was established.
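For the curious, that fixed-point idea is easy to sketch in C (a generic 16.16 example, not anything Jaguar-specific): an integer ALU gives you fractional math, and a wider intermediate keeps the precision through a multiply.

```c
#include <stdint.h>

typedef int32_t fix16_16;        /* 16 integer bits . 16 fraction bits */

#define FIX_ONE (1 << 16)        /* 1.0 in 16.16 fixed point */

/* Multiply two 16.16 values. The raw product carries 32 fraction
 * bits, so compute it in a 64-bit intermediate and shift back down. */
fix16_16 fix_mul(fix16_16 a, fix16_16 b)
{
    return (fix16_16)(((int64_t)a * b) >> 16);
}
```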

None of this is anything fancy today, of course. Never mind the fact that hardware other than the CPU does this kind of work in the PC (just as the coprocessors in the Jaguar do): chipset, sound and video hardware.
"32-bit" CPUs themselves, like the P4, have for long been able to shuffle data in 128-bit chunks. Both when they explicitly use instructions operating on 128-bit long data segments and 128-bit registers, and when not. And able to perform four 32-bit FP, or two 64-bit FP, operations on sequential data in a 128-bit chunk with one instruction, using 128-bit registers.
Yet no one refers to these 32-bit CPUs as anything other than "32-bit".

I hope that you - without feeling antagonized - now see why I advocate some caution, when using these -bit statements?

My point is - why do you care about that the Jaguar is 64-bit? What is that supposed to mean? In comparison to what? That isn't "64-bit"?

By all means, refer to the Jaguar as "64-bit".
I don't want to be argumentative. I'm posting because I want people to understand. It needs qualification.

If current consoles suddenly are back to "32-bit", as the OP's question suggested, then I would guess it's because 32-bit floating point beats the crap out of 64-bit fixed point. And I would further guess it's because it's, at least currently, no longer fashionable to speak in terms of "128-bit processor", just because it happens to be able to move around 128 bits.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
P4 has been considered 32-bit for the simple fact that its INSTRUCTION SET (the core set) can't handle data wider than 32 bits. Only with AMD's 64-bit extensions, later adopted by Intel into theirs, did the processors get 64-bit wide registers. (I'm talking about the general purpose instructions, not the specialized extensions like FPU, MMX, SSE.) This is not about data or address width, it's about instruction set. That's also why the 68000 is considered a 32-bit processor - with 24-bit addressing and a 16-bit external datapath, but that's not the point.

The 68k in the Jag _coordinates_ the show. The actual gameplay, graphics and sound work is done by the powerhorses, obviously not by the weakest chip in there. I can hook you up to people who actually programmed commercial games on this machine if you want more detail.

But nevermind, generalizing statements like

>32-bit floating point beats the crap out of 64-bit fixed point

already demonstrate that you don't have the faintest.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: josh609
I was playing my N64 last night.......when it hit me. Why was the Nintendo 64 64bit? Why have we seemed to go back to 32bit on consoles? Also, why were the PS2 and the dreamcast 128bit processors? Please explain......

Indeed. As one of those responsible for fraying this thread, I'll try.

The Jaguar's claim to 64-bitness relates directly to the perception that wider data chunks mean greater performance. It had a 64-bit bus and two 64-bit specialized devices - a blitter and an "object processor" - that were involved with manipulating and moving pixel data.
So in terms of width = performance, the Jaguar's claim is perfectly valid. It was the first 64-bit console.

Things start to get complicated when you want to compare these bits to the bitness of anything else - modern consoles or computers.
For one thing, the Pentium introduced a 64-bit bus, and MMX serves the same purpose as a blitter. It also operates on 64-bit wide segments. Yet the Pentium stays perceived as 32-bit. The easy route out of this mess is to say that computers and general purpose CPUs are a different thing.


Computers:

While there is generally a lot of agreement on what -bit classification any particular processor belongs to, there is no agreement on any definition. The one Peter uses for the P4 is very popular. The reason is that it happens to fit (so far, and with some contortions - "general purpose registers") computer & processor generations since the dawn of computing.

It is a potential sea of troubles though. One thing is that while this particular -bit property was relevant early on, that relevance is today mostly coincidental. So it has become less useful, even somewhat misleading, as a starting point for understanding the capabilities of whatever-bit.

Intuitively, one wants to understand n-bit width as a performance related property.
And it most certainly is. If, and only IF, it means that the processor can do something in one single crunch that it would otherwise require multiple operations to do.
There is no point in having wider logical operations than you have requirements for by some basic data type.

However, there is the opportunity to simultaneously handle several elements of a basic data type, side by side in a wider segment.
This is what modern CPUs do with their SIMD registers and instructions. SIMD, single instruction, multiple data. Also goes by the name vector processing.

Multiple pieces of sequential data, side by side, is also how the wider buses are used. It is also basically what consoles and videocards do with their 64/128/256 -bit architectures.
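That "several elements side by side in one wide word" idea can be sketched in plain C with a classic SWAR trick (illustrative only, not any console's actual code): eight byte-wide additions carried out at once in a single 64-bit integer, with the carries masked so they don't spill between lanes.

```c
#include <stdint.h>

/* Add eight unsigned bytes lane-by-lane inside one 64-bit word
 * (SIMD-within-a-register). The masks keep a carry out of one byte
 * from spilling into its neighbour. */
uint64_t add_8x8(uint64_t a, uint64_t b)
{
    const uint64_t HI = 0x8080808080808080ULL;  /* top bit of each byte  */
    uint64_t low_sum = (a & ~HI) + (b & ~HI);   /* add low 7 bits / lane */
    return low_sum ^ ((a ^ b) & HI);            /* restore the top bits  */
}
```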

No console has (yet) any reason to handle any basic single data type of greater width than 32 bits.

(The Jaguar didn't either. My wild guess at 64-bit fixed point arithmetic was wrong. The Jaguar is not capable of doing that fast, as it does not have any general 64-bit ALU. Also, I gather, game programming in those early days largely made do with shorter fixed point math.)

For a computer/general CPU, there are also very few opportunities to make good use of any ability to compute integers of greater width than 32 bits. Looking at width, we see that the CPU has indeed grown in width for performance reasons, for those particular things that have been useful: Floating point of higher precision (64-bit), buses and datapaths, vector processing.

Suppose we had a compelling need for integers longer than 32 bits simply for manipulating data. Then it's absolutely assured that we'd have seen an extension with wider integers long ago, just as we've seen wider FPU and SIMD extensions.
We would then have the benefit of higher performance when computing wide integers.
But if everything else had stayed the same, would we then have anything different from a "32-bit processor"? Here comes the great clash and entanglement! But fortunately, we don't have that situation, so I will elect to not go down that route, in the hope that it will save me a lot of tiresome discussion.

Instead focus on what, for instance, AMD86-64 brings along under the "64-bit" banner. AMD's sole purpose with AMD86-64 was to introduce a new operating mode. When WindowsXP-x64 edition or Vista64 runs on our computers, the CPU will be permanently set to 'long mode' and will map virtual 64-bit addresses to hardware. That's really it! That's the purpose behind 64-bit, and that's the one thing that will enable a whole new magnitude of capabilities for the PC. Period!
...But coincidental to that, now, and only now, comes the requirement to deal with 64-bit integers. More specifically pointers. So for that very purpose, the width of registers and instructions handling integers has grown to 64 bits.
While 64-bit integer registers and operations are going to be beneficial for other purposes - increasingly so with the larger data models of future computing - the sole real reason for their inclusion in AMD86-64 is for address arithmetic. Period.


Game consoles:

While the capability to handle wide segments of multiple sequential pieces of data does not affect the n-bit perception or label of computers and general purpose CPUs, it most certainly does when it comes to specialized processors, graphics cards and early consoles.
Here the n-bit property is used for a much more direct and easily understood paradigm. Simply the bit-width at which things are done. In registers and on buses.

In this sense the Jaguar's claim to 64-bit is perfectly valid. And while I don't know much about game consoles, I would hazard a guess that so is N64's. And PS2's and Dreamcast's claim to 128-bit.

"why have we seemed to go back to 32bit on consoles" ?

Well, I would guess that has much to do with marketing climate.

Because of the state of economy and technology, early gaming devices were made with 16-bit and 32-bit components. And I think a big deal was made of 32-bit by marketing at the time, so it's sort of a logical next step to boast about 64 bits.

No console has (yet) any reason to compute pieces of singular data wider than 32 bits. And they don't. And this bit thing marketing becomes increasingly vulnerable to flak. I've checked up on the Jaguar, and its 64-bitness was the subject of much contention and debate.
The thing is that the only thing that gets done in 64-bit chunks on the Jaguar is pixel manipulation. For the rest, it's 32-bit.

Comparing to modern consoles you would have to compare it to the graphics chips. I believe they are very wide indeed. 256-bit? Maybe?
But all that has become sort of a separate entity on later 3D-oriented consoles. Focus has shifted from bus width, sprites, blobs and other 2D pixel stuff towards the processing power available to run a 3D engine.

That processing power is now floating point. The performance "width" of computing is now served by parallel computing. Both by vector units, and lately by multiple cores.
And each of these lanes is 32-bit, because they don't need to be wider. There is better use for more of them.

So my answer would be that we seem to be back to 32 bits because marketing is focusing on a different component. I'm fairly certain that in the specific regards in which the N64 and Jaguar were 64-bit, modern consoles are just as wide or wider.
 

Elcs

Diamond Member
Apr 27, 2002
6,278
6
81
Could someone explain the Amiga CD32 to me then?

I'm still merely 20 years old now, so I was quite a bit younger back then, and I remember the CD32 being touted as the first ever "32-bit console". After reading this, I feel I should update a little bit of my history.
 

icarus4586

Senior member
Jun 10, 2004
219
0
0
The only valid reason for a CPU to be referred to as XX-bit should be based on its memory address space.

The N64 had a MIPS 4300 CPU that was 64 bit in this sense; its hardware registers could each hold 64 bits of data. Thus it had a 64 bit memory space. Since it only had 4 or 8 MB of RAM, it wasn't really useful. I believe the processor was very similar to that used in SGI workstations.

While I'm not so sure about the Jaguar, I do know that the Motorola 68000 has 16 32-bit registers, allowing for a 32-bit memory address space. Its external bus was 16 bits wide. It had two 16-bit ALUs, so in order to do 32-bit math operations, it had to use both ALUs at once, sacrificing speed.

The Athlon 64 is "64 bit" in this sense. AMD updated the i386 ISA to have 16 64 bit registers instead of 8 32 bit registers. When running in compatibility mode, only these 8 are used.

As far as more recent consoles like the Gamecube, PS2, and XBox, their "bitness" is marketing BS. Maybe they have 128 bit data buses. Big deal. Maybe they can perform operations with 128 bit operands. Big deal. Marketers like to quote big numbers. If the Dreamcast is 128 bit, then obviously it's twice as good as the N64, right?

The XBox has a 32 bit x86 CPU. The Gamecube has a 32 bit PowerPC CPU. I'm not sure about the PS2, the Emotion Engine is weird. It's based on the MIPS ISA, that's about all I know.

Bottom line? Most of the stuff you hear about consoles is marketing.
 

Varun

Golden Member
Aug 18, 2002
1,161
0
0
Originally posted by: icarus4586
The only valid reason for a CPU to be referred to as XX-bit should be based on its memory address space.

While I'm not so sure about the Jaguar, I do know that the Motorola 68000 has 16 32-bit registers, allowing for a 32-bit memory address space. Its external bus was 16 bits wide. It had two 16-bit ALUs, so in order to do 32-bit math operations, it had to use both ALUs at once, sacrificing speed.

The Athlon 64 is "64 bit" in this sense. AMD updated the i386 ISA to have 16 64 bit registers instead of 8 32 bit registers. When running in compatibility mode, only these 8 are used.

The Athlon 64 can only address 48 bits or 32 Terrabytes of RAM locations (256TB RAM) as opposed to the 2 Exabytes of memory locations (16 Exabytes of RAM) if it had a 64 bit address.

I'm in the camp that the width of the general purpose registers is what classifies a CPU as far as bits.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: Varun
The Athlon 64 can only address 48 bits or 32 Terrabytes of RAM locations (256TB RAM) as opposed to the 2 Exabytes of memory locations (16 Exabytes of RAM) if it had a 64 bit address.

??? What is this misunderstanding? - AMD86-64 ISA defines a 64-bit virtual address space!

Most current mainboards are limited to 4GB ram. The single core Athlon64 as a current hardware implementation of AMD86-64, has a memory controller that can handle 16GB ram, and has an address bus that is 40 bits wide, good for a 1TB range.

The AMD86-64 ISA defines a 64-bit virtual address space. The instruction address format is 64-bit. In the current implementation, a memory block needs to be allocated in the lower 48 bits. Addresses in this 48-bit range will be mapped to 40-bit hardware addresses. A 256TB range to put a few GB memory into, is hardly limiting.

But software is required to generate a 64-bit address. The 16 bits (or whatever, depending upon how many bits are used) above 48 bits must be in canonical form and cannot be used for any other purpose. This is important. When the first implementation of Motorola's 32-bit 68k, the MC68000, only used 24 bits of the address (16MB, again perfectly valid, at a time when 1MB was outrageous), many software engineers felt clever stuffing data into the unused upper address bits, producing software that would not run on later CPUs in the series. This will not happen with AMD86-64.

A virtual space is not hardware memory size. It is freedom to allocate memory blocks in. The ISA ultimately provides for 4PB hardware memory. Addresses in a 64-bit space will be mapped to a 52-bit hardware address bus.
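The sizes here are easy to sanity-check in C (simple powers of two, matching the figures above): 48 virtual bits give a 256TB range, the 40-bit hardware bus gives 1TB, and 52 physical bits give 4PB.

```c
#include <stdint.h>

/* Bytes reachable with an n-bit address (n < 64). */
uint64_t space_bytes(unsigned bits)
{
    return 1ULL << bits;
}

#define TB (1ULL << 40)   /* one terabyte */
#define PB (1ULL << 50)   /* one petabyte */
```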


I'm in the camp that the width of the general purpose registers is what classifies a CPU as far as bits.

Fair enough. I'm sure you're in good company. There is good argument for that too. But I don't feel "the Athlon 64 can only address..." is it.
The main advantage of that interpretation is that it's the original, and that it's consistent when you look back on previous architectures.

Edit: The negative sides are that it's completely meaningless (unless you make the connection to addressing). And that this version of the -bit classification is fully to blame for the fact that most people have completely misunderstood what 16, 32 and 64 bit means in terms of computing technology.
And having a *definition* that directly leads people with an interest in computers to a wrong, false understanding is, IMHO, bad.

You do realize though, don't you, - that the limitations of 32-bit software are entirely due to the fact that it uses 32 bits for addressing! 32-bit software is NOT limited in any way by the fact it only has access to 32 bit wide operations on integers in a "general purpose register". Exactly the same again when considering 16-bit and 32-bit.
 

icarus4586

Senior member
Jun 10, 2004
219
0
0
The Athlon 64 can only address 48 bits or 32 Terrabytes of RAM locations (256TB RAM) as opposed to the 2 Exabytes of memory locations (16 Exabytes of RAM) if it had a 64 bit address.
First, that sentence makes no sense at all. "32 terabytes of memory locations" is nonsensical. A 48 bit memory address space results in 2^48 unique memory locations, giving 256 trillion addresses, meaning 256TB RAM. AMD64 / x86-64 / EM64T, whatever you want to call it, specifies 16 64 bit registers. In 64 bit real mode, memory addresses take up 64 bits even if only 48 are used.

Regardless, what Vee said is right. The 48 bit limitation is not a limitation in the ISA itself, just the current implementation of the ISA.

And I meant to say more about the "bitness" of consoles in my previous post. By "marketing BS," what I meant was that there's always some justification for assigning a certain system a certain "bitness" classification. The way that different companies use it in different ways (the bus width, the operand width of SIMD operations, etc.) is not really important. It impacts performance, but it doesn't define a more advanced architecture.
 

Varun

Golden Member
Aug 18, 2002
1,161
0
0
Originally posted by: icarus4586
The Athlon 64 can only address 48 bits or 32 Terrabytes of RAM locations (256TB RAM) as opposed to the 2 Exabytes of memory locations (16 Exabytes of RAM) if it had a 64 bit address.
First, that sentence makes no sense at all. "32 terabytes of memory locations" is nonsensical. A 48 bit memory address space results in 2^48 unique memory locations, giving 256 trillion addresses, meaning 256TB RAM. AMD64 / x86-64 / EM64T, whatever you want to call it, specifies 16 64 bit registers. In 64 bit real mode, memory addresses take up 64 bits even if only 48 are used.

Regardless, what Vee said is right. The 48 bit limitation is not a limitation in the ISA itself, just the current implementation of the ISA.

And I meant to say more about the "bitness" of consoles in my previous post. By "marketing BS," what I meant was that there's always some justification for assigning a certain system a certain "bitness" classification. The way that different companies use it in different ways (the bus width, the operand width of SIMD operations, etc.) is not really important. It impacts performance, but it doesn't define a more advanced architecture.

OK, what I meant to say was 2^48 byte-addressable memory locations, or 256TB of RAM at 1 byte per location. I had messed up my math and then edited, but should have just rewritten what I put down.
 

Varun

Golden Member
Aug 18, 2002
1,161
0
0
Originally posted by: Vee
You do realize though, don't you, - that the limitations of 32-bit software are entirely due to the fact that it uses 32 bits for addressing! 32-bit software is NOT limited in any way by the fact it only has access to 32 bit wide operations on integers in a "general purpose register". Exactly the same again when considering 16-bit and 32-bit.

Yes, the 4GB limit is finally affecting us, and I do realise that the performance increase of 64-bit registers is not the reason for the move to 64-bit computers.
 

Googer

Lifer
Nov 11, 2004
12,576
7
81
Originally posted by: the splat in the hat
oh man flashbacks , that atari rocked !


Yeah, but too bad the console and its sales numbers sank like a stone.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: icarus4586
The only valid reason for a CPU to be referred to as XX-bit should be based on its memory address space.

The N64 had a MIPS 4300 CPU that was 64 bit in this sense; its hardware registers could each hold 64 bits of data. Thus it had a 64 bit memory space. Since it only had 4 or 8 MB of RAM, it wasn't really useful. I believe the processor was very similar to that used in SGI workstations.

While I'm not so sure about the Jaguar, I do know that the Motorola 68000 has 16 32-bit registers, allowing for a 32-bit memory address space. Its external bus was 16 bits wide. It had two 16-bit ALUs, so in order to do 32-bit math operations, it had to use both ALUs at once, sacrificing speed.

The Athlon 64 is "64 bit" in this sense. AMD updated the i386 ISA to have 16 64 bit registers instead of 8 32 bit registers. When running in compatibility mode, only these 8 are used.

As far as more recent consoles like the Gamecube, PS2, and XBox, their "bitness" is marketing BS. Maybe they have 128 bit data buses. Big deal. Maybe they can perform operations with 128 bit operands. Big deal. Marketers like to quote big numbers. If the Dreamcast is 128 bit, then obviously it's twice as good as the N64, right?

The XBox has a 32 bit x86 CPU. The Gamecube has a 32 bit PowerPC CPU. I'm not sure about the PS2, the Emotion Engine is weird. It's based on the MIPS ISA, that's about all I know.

Bottom line? Most of the stuff you hear about consoles is marketing.

Last generation, MHz, polygons, and pixels were advertised.

This gen, cores and flops are advertised.