Why are we still running at 32 bits?

spectrecs

Junior Member
Sep 26, 2004
14
0
0
A friend and I were discussing this the other day: why are CPUs running at 32 (and now 64) bits instead of, say, 128 or something even bigger? If I'm not mistaken, there were some GPUs built years ago, before 64-bit CPUs took off, such as the N64's GPU (that ran at 64 bits, right?), and wasn't one of the PlayStation's GPUs 64-bit or something higher?

I know very little about modern chip design, so I'd appreciate it if someone could shed some light on the topic.
 

EightySix Four

Diamond Member
Jul 17, 2004
5,122
52
91
This conversation has been had before. Modern CPUs already operate at those widths: instruction sets like MMX, SSE, SSE2, SSE3, etc. all work on 64-bit and wider data. What the "bitness" really changes, in the sense you're thinking of, is how much memory can be addressed. AMD's CPUs have a lot more optimizations than just the 64-bit side of things, like doubling the number of registers, and that's why there's a performance increase. Taking a 32-bit CPU and simply making it 64-bit would not help at all...
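For illustration, here's a minimal sketch (assuming a compiler that supports the SSE2 intrinsics in <emmintrin.h>): even on a "32-bit" x86 CPU, the 128-bit XMM registers work on sixteen bytes per instruction, while pointers and general-purpose integers stay 32-bit.

#include <emmintrin.h>  /* SSE2 intrinsics: 128-bit XMM registers */
#include <stdio.h>

int main(void)
{
    /* Sixteen 8-bit values packed into one 128-bit register each. */
    unsigned char a[16] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
    unsigned char b[16] = { 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10 };
    unsigned char c[16];

    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    __m128i vc = _mm_add_epi8(va, vb);      /* 16 byte-wide adds in one instruction */
    _mm_storeu_si128((__m128i *)c, vc);

    for (int i = 0; i < 16; i++)
        printf("%d ", c[i]);
    printf("\n");
    return 0;
}

None of that changes how much memory a program can address, which is the sense in which the CPU is still called 32-bit.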
 

byosys

Senior member
Jun 23, 2004
209
0
76
Certain tasks simply can't use more than 32 or 64 bits efficiently, so the simple reason is that there is no market for it. As far as I know, there is no software written for a 128-bit CPU. There are, however, extensions to processors that operate on 64 and 128 bits. AltiVec works on 128 bits, but its uses are limited. It all comes down to supply and demand. If there were enough of a market for a 128-bit CPU, then someone (IBM would be my bet, but that's strictly a guess) would build one.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Originally posted by: spectrecs
A friend and I were discussing this the other day: why are CPUs running at 32 (and now 64) bits instead of, say, 128 or something even bigger? If I'm not mistaken, there were some GPUs built years ago, before 64-bit CPUs took off, such as the N64's GPU (that ran at 64 bits, right?), and wasn't one of the PlayStation's GPUs 64-bit or something higher?

I know very little about modern chip design, so I'd appreciate it if someone could shed some light on the topic.

Well, let me ask you this: what reason is there to go higher? Currently there is no need. When you say "64-bit" you are referring to 64-bit integer operations and memory addressing. Not many home users really need more than 4 GB (2^32 bytes) of memory.

As a side note, the N64's CPU is a 64-bit MIPS processor. Why? I have no idea. The machine only had 4 MB of RAM, so there really wasn't a point to using a 64-bit processor; if anything, it would be a hindrance given the memory constraints, because the code size would be larger. As for the PS2, I don't think it has a GPU as such; rather, it has multiple vector units alongside the CPU and a big, fat bandwidth pipe to the frame buffer.
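A rough sketch of what the label actually measures (assuming a C99 compiler): the pointer size, not the width of any SIMD unit, is what caps one program's address space.

#include <stdio.h>

int main(void)
{
    /* On a 32-bit build sizeof(void *) is 4, so a program can name at most
       2^32 bytes = 4 GB of virtual space; on a 64-bit build it is 8.       */
    printf("pointer size: %zu bytes\n", sizeof(void *));
    printf("2^32 bytes  = %llu (4 GB)\n", (unsigned long long)1 << 32);
    return 0;
}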
 

spectrecs

Junior Member
Sep 26, 2004
14
0
0
I thought it also had something to do with floating-point operations. The more bits, the easier it is for the processor to deal with floating-point numbers.
 

tinyabs

Member
Mar 8, 2003
158
0
0
Originally posted by: spectrecs
I thought it also had something to do with floating-point operations. The more bits, the easier it is for the processor to deal with floating-point numbers.

There was an 80387 math co-processor two decades ago; now the FPU is integrated into the modern CPU. There was also the Weitek P9000(?), which was used for specialized applications. Floating-point operations use IEEE formats, so whether a 32- or 64-bit processor is used is irrelevant. When we talk about a 64-bit processor, we are talking about integer and memory operand size, not FP, MMX, or SSE.
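As a quick check (a sketch, assuming a C99 compiler), the IEEE formats are fixed at 32 and 64 bits regardless of the processor's integer or pointer width:

#include <stdio.h>

int main(void)
{
    /* IEEE 754 single precision is 32 bits, double precision is 64 bits,
       on 32-bit and 64-bit builds alike; only the pointer width differs. */
    printf("float:  %zu bytes\n", sizeof(float));   /* 4 */
    printf("double: %zu bytes\n", sizeof(double));  /* 8 */
    printf("void *: %zu bytes\n", sizeof(void *));  /* 4 on 32-bit, 8 on 64-bit */
    return 0;
}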

I wrote a program in 1996 that did 128-bit moves using the floating-point unit. The performance was similar to REP MOVSD (32-bit) because it was bound by RAM bandwidth. It really comes down to the specs of the system; the CPU only plays a part.

If 64 bits are good, turn the knob to 640 bits; 640 bits would be better than just 64. No need to stop at 64. You see the point?
 

tinyabs

Member
Mar 8, 2003
158
0
0
If 64 bits are good, turn the knob to 640 bits; 640 bits would be better than just 64. No need to stop at 64. You see the point?

Build an Ethernet interface around a 1024-bit processor and it might be 32 times faster than one driven by a 32-bit processor. Why? Because you can copy packets in just a few operations (a sketch follows below).
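Here's a rough sketch of the idea, assuming SSE2 is available: copy 16 bytes at a time instead of one. As I said above, though, main-memory bandwidth rather than register width usually sets the real ceiling.

#include <emmintrin.h>  /* SSE2: 128-bit loads and stores */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Copy a packet 16 bytes at a time; a sketch only. Real code would worry
   about alignment and would usually just call memcpy, which already does
   this kind of widening internally.                                       */
static void copy_packet(unsigned char *dst, const unsigned char *src, size_t len)
{
    size_t i = 0;
    for (; i + 16 <= len; i += 16) {
        __m128i chunk = _mm_loadu_si128((const __m128i *)(src + i));
        _mm_storeu_si128((__m128i *)(dst + i), chunk);
    }
    for (; i < len; i++)            /* tail bytes */
        dst[i] = src[i];
}

int main(void)
{
    unsigned char src[1500], dst[1500];     /* typical Ethernet frame size */
    memset(src, 0xAB, sizeof src);
    copy_packet(dst, src, sizeof src);
    printf("copies match: %d\n", memcmp(dst, src, sizeof src) == 0);
    return 0;
}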
 

Shenkoa

Golden Member
Jul 27, 2004
1,707
0
0
Reasons that 32-bit is efficient and 64-bit is ahead of its time:

1. Most buses in your system are 32 bits wide, so addressing in 64 bits would be pointless.
2. 32 bits can address up to 4 GB of RAM, and nothing uses that much.
3. You can double the amount you can address, but if the processor is too slow it won't matter anyway; I can carry two buckets of water at half the speed. The processor can move data in 64-bit chunks, but I don't think it will go any faster.
4. Doubling the registers won't matter either, because there is nothing big enough to fit in them anyhow.
 

imported_piglet

Junior Member
Sep 27, 2004
1
0
0
Diminishing returns.

It starts with a debate over architecture (CISC vs. RISC), moves on to bus architecture, and arrives at cost/benefit.

(1) Complex vs. simple (reduced) instruction set. There are only so many operations that can be packed into a single instruction, and further attempts to enhance instruction complexity beyond today's standards achieve ever-diminishing returns.

(2) I guess that leaves the higher return either in the addressing range or in the amount of data returned (whether multiple instructions or pure data), either for predictive caching or just for crunching.

(3) In practice it's difficult to make effective use of more than 4 GB of data. I'm involved in a software application that can use this much and more, and it's not an easy decision how much memory to use and what physical architecture to deploy. Dealing with this much memory (even with 64 bits) takes a lot of CPU grunt.

(3a) The driver in hardware today is games. No game uses 1 GB, let alone 4 GB.

(4) Software. There is currently a lack of software for the existing 64-bit architectures. This is not just the OS but application software, so today writing a 64-bit application is a non-starter on many platforms. It's a chicken-and-egg situation. Fortunately (for marketing reasons) AMD has moved the game on, and we'll soon see if 64-bit is better. I read somewhere that performance tests of Microsoft's beta XP64 showed it was slower than 32-bit, although I understand it is being rewritten, and it is clearly hampered by the lack of third-party support.

In short, both inertia and diminishing returns limit the need for, viability of, and speed at which 64-bit applications will be developed and become mainstream.

 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: spectrecs
A friend and I were discussing this the other day: why are CPUs running at 32 (and now 64) bits instead of, say, 128 or something even bigger? If I'm not mistaken, there were some GPUs built years ago, before 64-bit CPUs took off, such as the N64's GPU (that ran at 64 bits, right?), and wasn't one of the PlayStation's GPUs 64-bit or something higher?

I know very little about modern chip design, so I'd appreciate it if someone could shed some light on the topic.

Here we go again... :)

First of all, we are not still at 32-bit at all, in the sense you mean!
The memory data bus, for instance, has been 64 bits wide since the P5 Pentium.
Now a dual-channel, 128-bit-wide memory interface is quite common.
Data units that are 64 bits wide, like double-precision floating-point numbers, have been handled in 64-bit-wide registers since...
actually I don't quite know exactly when, but at least since the introduction of the '387 FPU, later incorporated into the CPU.
Beyond that there's not been much need to handle longer data units. A char or a color channel is no more than a byte. A Unicode character that can represent every culture's character set is only 16 bits, and 32-bit integers are enough for most uses: counters, indexes, etc.

But wait (rhetorically): can't we process many shorter data units simultaneously, like eight bytes (8 bits each) at a time in 64-bit registers?
Oh, yes indeed! It's called MMX, introduced on later P5 Pentiums. Then there's packed single-precision FP math: 3DNow! (introduced with the K6-2), SSE, 3DNow!+, and finally 128-bit-wide vector operations (it's called a vector when you pack several units together like this in a longer segment), both integer and floating point, in SSE2/3.

So by your view of bits, we are already at 128 bits. And I'm sure we are going to get wider still.

It's called 32-bit computing and 32-bit code for a different reason. The binary machine code refers to every piece of data, including the instructions themselves, in its own little virtual world, with 32-bit numbers. These 32-bit numbers are the virtual addresses of every single byte in the program's virtual space.

And 32 bits are not enough anymore. They don't go farther than 4 GB, which in turn means, for various reasons, that only about 1.6 GB of code and data in total can be addressed by one Win32 program.

So we need a new CPU that can execute instructions referring to data with 64-bit numbers instead. That's what 64-bit computing is all about.
And since we now need to handle a lot of 64-bit integers in the CPU, it's time to get 64-bit-long integer registers too.
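To make the 32-bit wall concrete, here's a hedged sketch (assuming a typical 32-bit Windows or Linux process): keep allocating 100 MB blocks and the program usually gives up well short of 4 GB, because only part of the 4 GB virtual space is available to it and that part gets fragmented. Built as a 64-bit program, the same code runs far past that point.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t block = 100u * 1024 * 1024;   /* 100 MB per allocation */
    size_t total = 0;

    /* Keep allocating (and deliberately never freeing) until the virtual
       address space runs out. On a typical 32-bit process this stops
       somewhere below 4 GB; on a 64-bit process it will keep going until
       physical RAM and swap are exhausted, so don't run it unattended there. */
    while (malloc(block) != NULL) {
        total += block;
        printf("allocated %zu MB so far\n", total / (1024 * 1024));
    }
    printf("allocation failed after ~%zu MB\n", total / (1024 * 1024));
    return 0;
}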
 

Falloutboy

Diamond Member
Jan 2, 2003
5,916
0
76
Originally posted by: Shenkoa
Reasons that 32-bit is efficient and 64-bit is ahead of its time:

1. Most buses in your system are 32 bits wide, so addressing in 64 bits would be pointless.
2. 32 bits can address up to 4 GB of RAM, and nothing uses that much.
3. You can double the amount you can address, but if the processor is too slow it won't matter anyway; I can carry two buckets of water at half the speed. The processor can move data in 64-bit chunks, but I don't think it will go any faster.
4. Doubling the registers won't matter either, because there is nothing big enough to fit in them anyhow.

I agree with most of this, but the 4 GB RAM limit is becoming a real problem. My workstations at work use 2 GB, and servers sometimes push 8 GB or more. In two or three years, my guess is that 4 GB will be what 1 GB is now.
 
Jun 18, 2004
105
0
0
Originally posted by: jhu
Originally posted by: spectrecs
A friend and I were discussing this the other day: why are CPUs running at 32 (and now 64) bits instead of, say, 128 or something even bigger? If I'm not mistaken, there were some GPUs built years ago, before 64-bit CPUs took off, such as the N64's GPU (that ran at 64 bits, right?), and wasn't one of the PlayStation's GPUs 64-bit or something higher?

I know very little about modern chip design, so I'd appreciate it if someone could shed some light on the topic.

Well, let me ask you this: what reason is there to go higher? Currently there is no need. When you say "64-bit" you are referring to 64-bit integer operations and memory addressing. Not many home users really need more than 4 GB (2^32 bytes) of memory.

As a side note, the N64's CPU is a 64-bit MIPS processor. Why? I have no idea. The machine only had 4 MB of RAM, so there really wasn't a point to using a 64-bit processor; if anything, it would be a hindrance given the memory constraints, because the code size would be larger. As for the PS2, I don't think it has a GPU as such; rather, it has multiple vector units alongside the CPU and a big, fat bandwidth pipe to the frame buffer.


The PS2 does have a GPU.

The PS2 has a CPU, a GPU, and two vector units.

I think the PS2 also nicely shows how lazy programmers are today: it has a bucketload of raw processing power, but you have to write very clever code that lends itself to parallelism to make use of it.
 

Smilin

Diamond Member
Mar 4, 2002
7,357
0
0
A real short answer:

I think what you're getting at is: since we already have the technology to do so, why not just jump straight to a 128- or 256-bit system?

The reason is that there is additional overhead involved, and many operations work fine in just 32 bits. For instance, running your 32-bit OS in PAE mode lets it address up to 64 GB of physical memory (36-bit addresses), but it also chops the number of page-table entries that fit in a page in half (very roughly). This is just one tiny example.
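To put numbers on the page-table point (a sketch, assuming standard 4 KB x86 pages: a classic page-table entry is 4 bytes, a PAE entry is 8 bytes):

#include <stdio.h>

int main(void)
{
    const int page_size  = 4096;  /* bytes in a 4 KB x86 page            */
    const int pte_legacy = 4;     /* bytes per entry, classic 32-bit PTE */
    const int pte_pae    = 8;     /* bytes per entry, PAE PTE            */

    printf("entries per page-table page, non-PAE: %d\n", page_size / pte_legacy);  /* 1024 */
    printf("entries per page-table page, PAE:     %d\n", page_size / pte_pae);     /* 512  */
    return 0;
}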

Systems are getting fast enough now that the transition to 64-bit computing is becoming sensible. If we had transitioned to 64-bit computing back in, say, the 386 days, the system would have choked on its own overhead. If we reach 10 GHz without transitioning to 64 bits, the system will be choking on narrow buses.

In other words, an increase in bits is going to have to go hand in hand with increases in performance.
 

imgod2u

Senior member
Sep 16, 2000
993
0
0
Originally posted by: jhu
Originally posted by: spectrecs
A friend and I were discussing this the other day: why are CPUs running at 32 (and now 64) bits instead of, say, 128 or something even bigger? If I'm not mistaken, there were some GPUs built years ago, before 64-bit CPUs took off, such as the N64's GPU (that ran at 64 bits, right?), and wasn't one of the PlayStation's GPUs 64-bit or something higher?

I know very little about modern chip design, so I'd appreciate it if someone could shed some light on the topic.

Well, let me ask you this: what reason is there to go higher? Currently there is no need. When you say "64-bit" you are referring to 64-bit integer operations and memory addressing. Not many home users really need more than 4 GB (2^32 bytes) of memory.

As a side note, the N64's CPU is a 64-bit MIPS processor. Why? I have no idea. The machine only had 4 MB of RAM, so there really wasn't a point to using a 64-bit processor; if anything, it would be a hindrance given the memory constraints, because the code size would be larger. As for the PS2, I don't think it has a GPU as such; rather, it has multiple vector units alongside the CPU and a big, fat bandwidth pipe to the frame buffer.

The N64 was 64-bit because it did all of its graphics calculations in integer math, not FP, so it needed 64-bit ALUs. Modern gaming has almost entirely switched over to FP, which provides more flexibility, precision, and speed. Since FP is traditionally decoupled from the integer/address registers on an MPU, its "bitness" doesn't affect the addressing space, and therefore being able to do 64-bit FP doesn't make an MPU "64-bit".
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Originally posted by: imgod2u
Originally posted by: jhu
Originally posted by: spectrecs
A friend and I were discussing this the other day: why are CPUs running at 32 (and now 64) bits instead of, say, 128 or something even bigger? If I'm not mistaken, there were some GPUs built years ago, before 64-bit CPUs took off, such as the N64's GPU (that ran at 64 bits, right?), and wasn't one of the PlayStation's GPUs 64-bit or something higher?

I know very little about modern chip design, so I'd appreciate it if someone could shed some light on the topic.

Well, let me ask you this: what reason is there to go higher? Currently there is no need. When you say "64-bit" you are referring to 64-bit integer operations and memory addressing. Not many home users really need more than 4 GB (2^32 bytes) of memory.

As a side note, the N64's CPU is a 64-bit MIPS processor. Why? I have no idea. The machine only had 4 MB of RAM, so there really wasn't a point to using a 64-bit processor; if anything, it would be a hindrance given the memory constraints, because the code size would be larger. As for the PS2, I don't think it has a GPU as such; rather, it has multiple vector units alongside the CPU and a big, fat bandwidth pipe to the frame buffer.

The N64 was 64-bit because it did all of its graphics calculations in integer math, not FP, so it needed 64-bit ALUs. Modern gaming has almost entirely switched over to FP, which provides more flexibility, precision, and speed. Since FP is traditionally decoupled from the integer/address registers on an MPU, its "bitness" doesn't affect the addressing space, and therefore being able to do 64-bit FP doesn't make an MPU "64-bit".

That would be interesting if it were indeed true, but I'm not so sure that it is. The N64 uses an NEC R4300i processor, which does have an FPU (take a look here). Why would programmers use integer math for polygon setup and manipulation when the FPU is available and fairly powerful?
 

itachi

Senior member
Aug 17, 2004
390
0
0
OK, you guys need to stop conflating the ALU width with the amount of memory that can be addressed; there is absolutely no correlation. The maximum addressable memory depends on the width of the address bus, and Pentium 4s have a 36-bit address bus (enabled through Physical Address Extension), so a Pentium 4 can address a maximum of 64 GB of memory. Athlon 64s have a 40-bit physical address space, so 1024 GB is the maximum addressable. However, you can't use it without one of the Windows Server editions; the other editions don't have PAE support.

As for why we're only just starting to transition to 64 bits: there's been no need. Going from 32 to 64 bits shortens execution time for integer calculations and basic floating-point calculations only when the additional precision is necessary, but at the same time it needlessly complicates things for hardware-level programmers and engineers. It'd be easier and cheaper to just increase the clock speed, modify the core, or come out with a new processor.

If processors doubled their ALU width every three years, how annoyed do you think people would be? You buy a piece of software and three years later you can't use it without emulation. Then, to keep things backwards compatible, Microsoft would have to keep the operating system at the lowest common level: a 128-bit processor running a 128-bit version of Windows based on 16-bit DOS.
Systems are getting fast enough now that the transition to 64-bit computing is becoming sensible. If we had transitioned to 64-bit computing back in, say, the 386 days, the system would have choked on its own overhead. If we reach 10 GHz without transitioning to 64 bits, the system will be choking on narrow buses.
No, it wouldn't. If it were 64-bit, it would still process 8- and 16-bit values the same way it had before. To add two 64-bit numbers (a qword), a 16-bit processor has to split them into four words, add each pair of words, and after all but the last addition check the carry bit and propagate it into the next word. A 64-bit processor just adds the two qwords, the end. The overhead would have been reduced, not increased.
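Here's a minimal sketch of that difference in C rather than assembly: adding two 64-bit numbers as four 16-bit words with hand-propagated carries, roughly what a 16-bit ALU forces on you, versus the single add a 64-bit ALU does.

#include <stdint.h>
#include <stdio.h>

/* Add two 64-bit values as four 16-bit words, propagating the carry by hand,
   roughly what code running on a 16-bit ALU has to do.                       */
static uint64_t add64_via_16bit_words(uint64_t a, uint64_t b)
{
    uint64_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 4; i++) {
        uint32_t wa  = (uint32_t)((a >> (16 * i)) & 0xFFFF);
        uint32_t wb  = (uint32_t)((b >> (16 * i)) & 0xFFFF);
        uint32_t sum = wa + wb + carry;            /* 16-bit add with carry in */
        carry = (sum >> 16) & 1;                   /* carry out to next word   */
        result |= (uint64_t)(sum & 0xFFFF) << (16 * i);
    }
    return result;
}

int main(void)
{
    uint64_t a = 0x123456789ABCDEF0ULL, b = 0x0FEDCBA987654321ULL;
    printf("word-by-word:  %llx\n", (unsigned long long)add64_via_16bit_words(a, b));
    printf("native 64-bit: %llx\n", (unsigned long long)(a + b));  /* one add on a 64-bit ALU */
    return 0;
}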
 

Gioron

Member
Jul 22, 2004
73
0
0
This has been hinted at and talked around, but it's worth saying explicitly and simply:
Die size is crucial. Using 64 bits instead of 32 bits for everything significantly increases the die size of a chip, which in turn significantly lowers the yield. Not only do fewer chips fit on a single wafer when the die is larger, each chip also has a much higher chance of containing a defect. The exact math of how bad things get as you increase die size will have to wait until I'm not too lazy to look it up again, but take my word for it, it gets ugly quickly.
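For a rough sense of the scaling, here's a sketch using one common first-order approximation, the Poisson yield model Y = exp(-D*A); the model choice and the numbers are illustrative assumptions, not the exact math.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double defects_per_cm2 = 0.5;   /* assumed defect density              */
    const double small_die_cm2   = 1.0;   /* hypothetical smaller (32-bit) die   */
    const double big_die_cm2     = 2.0;   /* same design with everything widened */

    /* Poisson yield model: fraction of dice that contain zero defects. */
    double y_small = exp(-defects_per_cm2 * small_die_cm2);
    double y_big   = exp(-defects_per_cm2 * big_die_cm2);

    printf("yield at %.1f cm^2: %.1f%%\n", small_die_cm2, 100.0 * y_small);  /* about 60.7 */
    printf("yield at %.1f cm^2: %.1f%%\n", big_die_cm2,   100.0 * y_big);    /* about 36.8 */
    return 0;
}

And on top of the per-die yield, fewer of the larger dice fit on each wafer, so the two effects compound.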

To visualize why this is so, imagine taking a piece of paper, drawing 1" squares on it, and posting it on a wall. Now get a shotgun, walk about 20' back, and fire a shot at it. Every place a pellet hits would be a defect on the wafer, but there are bound to be plenty of undamaged squares left. Now try the same thing again, but use 6" squares. Odds are you'll have at most one square that's defect-free, if that. (Disclaimer: don't try this at home. Not responsible for any property damage that may result. Shotgun spread may vary depending on model; consult your local gun shop for details.)

There are some other engineering details, but they've already been discussed, and they don't have nearly the impact of this one simple fact. Since die size is so critical to yield, manufacturers will only increase it when there is a significant improvement from doing so. If 99% of your calculations do just fine on 32 bits, and the other 1% can be emulated in 32 bits to run a bit slower, there isn't much reason to increase your integer size. AMD is looking ahead and seeing that in the future, 99% of operations will still do just fine on 32-bit processors, but that 1% will be literally impossible with the current architecture, so it's best to design the improvements now and work out the bugs before they're absolutely necessary. If it weren't for the memory-size limit of the current architecture, they'd probably be better off adding an extension like SSE to give the processor more registers and so on, and leaving everything at 32 bits. They could probably eke out the same increase in speed with much less real-estate overhead on the chip, and a higher yield as a result.
 

imgod2u

Senior member
Sep 16, 2000
993
0
0
Originally posted by: jhu
Originally posted by: imgod2u
Originally posted by: jhu
Originally posted by: spectrecs
A friend and I were discussing this the other day: why are CPUs running at 32 (and now 64) bits instead of, say, 128 or something even bigger? If I'm not mistaken, there were some GPUs built years ago, before 64-bit CPUs took off, such as the N64's GPU (that ran at 64 bits, right?), and wasn't one of the PlayStation's GPUs 64-bit or something higher?

I know very little about modern chip design, so if someone could shed some light on the topic, I'd appreciate it.

Well, let me ask you this: what reason is there to go higher? Currently there is no need. When you say "64-bit" you are referring to 64-bit integer operations and memory addressing. Not many home users really need more than 4 GB (2^32 bytes) of memory.

As a side note, the N64's CPU is a 64-bit MIPS processor. Why? I have no idea. The machine only had 4 MB of RAM, so there really wasn't a point to using a 64-bit processor; if anything, it would be a hindrance given the memory constraints, because the code size would be larger. As for the PS2, I don't think it has a GPU as such; rather, it has multiple vector units alongside the CPU and a big, fat bandwidth pipe to the frame buffer.

The N64 was 64-bit because it did all of its graphics calculations in integer math, not FP, so it needed 64-bit ALUs. Modern gaming has almost entirely switched over to FP, which provides more flexibility, precision, and speed. Since FP is traditionally decoupled from the integer/address registers on an MPU, its "bitness" doesn't affect the addressing space, and therefore being able to do 64-bit FP doesn't make an MPU "64-bit".

That would be interesting if it were indeed true, but I'm not so sure that it is. The N64 uses an NEC R4300i processor, which does have an FPU (take a look here). Why would programmers use integer math for polygon setup and manipulation when the FPU is available and fairly powerful?

Interesting. However, the MIPS 4k series was the first to incorporate an on-board FPU, and FP performance wasn't really improved until the 5k series:
http://www.brainyencyclopedia....ml#MIPS%20CPU%20family
Also, keep in mind that many games in this period were still written with integer code, before full-blown FPUs were standard on chips, so the switch-over to FP wouldn't have happened instantaneously.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: itachi
OK, you guys need to stop conflating the ALU width with the amount of memory that can be addressed; there is absolutely no correlation.
Your introductory statement here, I'm afraid to say, I cannot agree with.
The maximum addressable memory depends on the width of the address bus, and Pentium 4s have a 36-bit address bus (enabled through Physical Address Extension), so a Pentium 4 can address a maximum of 64 GB of memory. Athlon 64s have a 40-bit physical address space, so 1024 GB is the maximum addressable. However, you can't use it without one of the Windows Server editions; the other editions don't have PAE support.
The rest of your information, while correct, is incomplete in a way I think would be misleading for anyone who doesn't already have good knowledge of these things.
As for why we're only just starting to transition to 64 bits: there's been no need. Going from 32 to 64 bits shortens execution time for integer calculations and basic floating-point calculations only when the additional precision is necessary, but at the same time it needlessly complicates things for hardware-level programmers and engineers. It'd be easier and cheaper to just increase the clock speed, modify the core, or come out with a new processor.
There is a need.

64-bit integer operations and registers have everything to do with how much memory can be addressed. You're missing the point: it's not directly about addressable physical RAM, it's about the application's virtual space. At least 99% of the reason for a 64-bit integer ISA is handling 64-bit pointers. Very close to 100% of the reason for 64-bit pointers is using binary instructions with 64-bit-long addresses. And close to 100% of the reason for binary code with a 64-bit address format is indeed addressable memory.

But let's start from the beginning. Consider some serious 32-bit Windows applications, like LightWave, Maya, Working Model, Tebis, or Catia. It's quite possible, if you're ambitious, to run out of memory with these apps. When (rhetorically) do we run out of memory, and why? We run out somewhere around 1.5-1.8 GB. Why? In order to allocate memory to an app, that memory has to be mapped into the app's virtual space. That space is 2 GB. The 32 bits of the virtual address are good for 4 GB, but we also have to map in OS resources (1 GB) and shared resources (1 GB).

(Two notes here. First, it seems the use of paging when mapping memory has caused people to forget fragmentation. It's true that you don't get fragmented physical memory any more, but we still get full fragmentation of the virtual space. Second, it's common to interject that a Windows boot option allows giving the shared space to the app instead, resulting in 3 GB of virtual space. That hardly even qualifies as a band-aid; I wouldn't use the word "solution" for it.)

So can anything be done to give the user more memory with apps like these?

No! The application has to be ported to a different software platform!

Two options can be suggested. One is to go back to essentially "16-bit-style" code, that is, segmented addressing, like Oracle does. Intel in particular has done a lot to pretend this is viable, not so much because they believe in it (they don't), but because it helps keep people complacent about buying 32-bit CPUs when they hear all this [censored at AT] about PAE and the 36-bit physical address bus. Perhaps they suffer the illusion that their 32-bit apps are good for 4 GB of RAM? Perhaps they suffer the illusion that PAE is suddenly, magically going to make more than 4 GB available?

I promise this will not happen. MS will not ship an OS providing a general PAE program model. And even if they did, no one is porting Win32 apps to PAE segments, and no one is doing a PAE Linux. No one wants this. It's as horrible as old Win16 on the '286: awkward, bug-prone, and expensive in every sense.

Instead we have the much better option of porting to a 64-bit code format, by which we primarily mean that the instructions' address format is 64 bits long. That means we need a new CPU with a new instruction set. That's neat and easy: AMD provided it, Microsoft is doing the 64-bit Windows, and Linux is too. And we keep essentially the same linear software model that the '386 introduced.

Finally, x86-64 provides for addressing 4 petabytes. The 40 bits and 1 terabyte are only the current hardware implementation. x86 (the '386) only provides linear addressing of 4 GB; PAE and the 36-bit hardware address bus don't change that.
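A quick arithmetic check of those figures (a small sketch; the units are binary powers of two):

#include <stdio.h>

int main(void)
{
    unsigned long long gb4 = 1ULL << 32;   /* 4 GB: the '386 linear limit          */
    unsigned long long tb1 = 1ULL << 40;   /* 1 TB: the current K8 implementation  */
    unsigned long long pb4 = 1ULL << 52;   /* 4 PB: the x86-64 architectural limit */

    printf("2^32 = %llu bytes (4 GB)\n", gb4);
    printf("2^40 = %llu bytes (1 TB)\n", tb1);
    printf("2^52 = %llu bytes (4 PB)\n", pb4);
    return 0;
}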

 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Originally posted by: imgod2u

Interesting. However, the MIPS 4k series was the first to incorporate an on-board FPU, and FP performance wasn't really improved until the 5k series:
http://www.brainyencyclopedia....ml#MIPS%20CPU%20family
Also, keep in mind that many games in this period were still written with integer code, before full-blown FPUs were standard on chips, so the switch-over to FP wouldn't have happened instantaneously.

But the N64 also had a fairly advanced GPU for the time; take a look at Super Mario 64. I doubt that Nintendo would use only integer calculations for all the polygons. The N64 was released in 1996, and although a lot of PC games were still integer-only, there were already games that required an FPU, such as Quake or Duke Nukem 3D. With the N64, the hardware was fixed, so developers didn't need to worry about whether or not an FPU was present (unlike in the PC world).
 

imgod2u

Senior member
Sep 16, 2000
993
0
0
Originally posted by: jhu
Originally posted by: imgod2u

Interesting. However, the MIPS 4k series was the first to incorporate an on-board FPU, and FP performance wasn't really improved until the 5k series:
http://www.brainyencyclopedia....ml#MIPS%20CPU%20family
Also, keep in mind that many games in this period were still written with integer code, before full-blown FPUs were standard on chips, so the switch-over to FP wouldn't have happened instantaneously.

But the N64 also had a fairly advanced GPU for the time; take a look at Super Mario 64. I doubt that Nintendo would use only integer calculations for all the polygons. The N64 was released in 1996, and although a lot of PC games were still integer-only, there were already games that required an FPU, such as Quake or Duke Nukem 3D. With the N64, the hardware was fixed, so developers didn't need to worry about whether or not an FPU was present (unlike in the PC world).


Well, the FPU has other uses. According to the site, the MIPS processor did most of the audio processing, and the GPU was used solely for rendering while the MIPS chip did the polygon setup. I would guess that some games were written using integer math while others used FP (later N64 games looked very different from Mario 64 or Zelda 64).
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Vee
Two options can be suggested. One is to go back to essentially "16-bit-style" code, that is, segmented addressing, like Oracle does. Intel in particular has done a lot to pretend this is viable, not so much because they believe in it (they don't), but because it helps keep people complacent about buying 32-bit CPUs when they hear all this [censored at AT] about PAE and the 36-bit physical address bus. Perhaps they suffer the illusion that their 32-bit apps are good for 4 GB of RAM? Perhaps they suffer the illusion that PAE is suddenly, magically going to make more than 4 GB available?

I promise this will not happen. MS will not ship an OS providing a general PAE program model. And even if they did, no one is porting Win32 apps to PAE segments, and no one is doing a PAE Linux. No one wants this. It's as horrible as old Win16 on the '286: awkward, bug-prone, and expensive in every sense.
You promise? :eek: OK, jokes aside, I'm not sure what you mean by "general program model"; explain it for me if you don't mind (not the terminology, but the model given by MS).
Finally, x86-64 provides for addressing 4 petabytes. The 40 bits and 1 terabyte are only the current hardware implementation. x86 (the '386) only provides linear addressing of 4 GB; PAE and the 36-bit hardware address bus don't change that.
You're right about one thing: the x86-64 architecture does support only 4 PB, in legacy mode. The 52-bit limitation only applies when the software is running in legacy mode (same as now, with 40 bits, which I failed to mention). And I suggest you read more about PAE: turning PAE on only adds one more level to the paging mechanism (which is three-level on Intel's implementation), four lookups versus three. An increase of one doesn't constitute exponential or logarithmic time complexity.


I would respond more to your message, but I've got exams coming up and I've already spent too much time on this. Next time.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: itachi
...I'm not sure what you mean by "general program model"; explain it for me if you don't mind (not the terminology, but the model given by MS).
Well, I'm thinking of an application program model like Win16, Win32s, Win32, or Win64. This is vague, I know, since MS does provide some support for PAE, but bear with me.
You're right about one thing: the x86-64 architecture does support only 4 PB, in legacy mode. The 52-bit limitation only applies when the software is running in legacy mode (same as now, with 40 bits, which I failed to mention).
More information that is somewhat correct but incomplete and potentially misleading, I'm afraid.
Theoretically, x86-64 could allow you to disable paging and the MMU and put the 64-bit address directly onto the physical bus.
As for why anyone would want to do this... :roll: But that is entirely irrelevant, as we will never see an x86-64 CPU with a physical address bus wider than 52 bits.
x86-64's paging mechanism will ultimately support translating 64-bit virtual addresses into a 52-bit physical space, which is another way of saying that the x86-64 MMU will support 4 PB ("only" 4 PB :confused: ). Currently it supports mapping 48-bit virtual addresses (in 64-bit canonical form) into a 52-bit physical space.
Again, we of course have "only" a 40-bit physical address bus on the K8.
Some future may well see a different 64-bit architecture, compatible with apps written for x86-64 long mode's 64-bit mode but dropping long mode's compatibility mode and legacy mode, that will manage mapping memory into a physical space larger than 4 PB. But that is an entirely different thing.
And I suggest you read more about PAE: turning PAE on only adds one more level to the paging mechanism (which is three-level on Intel's implementation), four lookups versus three. An increase of one doesn't constitute exponential or logarithmic time complexity.
You should consider this from the application's point of view instead. A 32-bit application utilizing PAE must manage the virtual segments itself. I guess you weren't around in the days of 16-bit segmented addressing, since you're so flippant about this? ;) Using redundant far pointers would simplify things, but it would also dramatically reduce performance, by an order of magnitude.
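Here's a hedged sketch of why that hurts; the struct and map_window() below are made-up names for illustration, not a real OS API. Every access through a PAE-style "far" pointer may first have to re-map a window into the 32-bit view, while a flat 64-bit pointer is just dereferenced.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical PAE-style "far pointer": a window id saying which >4 GB region
   should be mapped into the 32-bit view, plus an offset inside that window.  */
struct far_ptr {
    uint32_t window;
    uint32_t offset;   /* byte offset within the mapped window */
};

static int backing[4][1024];        /* tiny stand-in for memory beyond 4 GB */

static void *map_window(uint32_t window)
{
    /* In a real PAE scheme this would be a (slow) remapping call into the
       kernel; here it just picks a buffer.                                 */
    return backing[window % 4];
}

static int load_far(struct far_ptr p)
{
    int *base = (int *)map_window(p.window);   /* remap before the access */
    return base[p.offset / sizeof(int)];
}

static int load_flat(const int *base, size_t i)
{
    return base[i];                            /* flat 64-bit pointer: one load */
}

int main(void)
{
    backing[1][3] = 42;
    struct far_ptr fp = { 1, 3 * sizeof(int) };
    printf("far:  %d\n", load_far(fp));
    printf("flat: %d\n", load_flat(backing[1], 3));
    return 0;
}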
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Vee
More information that is somewhat correct but incomplete and potentially misleading, I'm afraid.
Theoretically, x86-64 could allow you to disable paging and the MMU and put the 64-bit address directly onto the physical bus.
As for why anyone would want to do this... :roll: But that is entirely irrelevant, as we will never see an x86-64 CPU with a physical address bus wider than 52 bits.
x86-64's paging mechanism will ultimately support translating 64-bit virtual addresses into a 52-bit physical space, which is another way of saying that the x86-64 MMU will support 4 PB ("only" 4 PB :confused: ). Currently it supports mapping 48-bit virtual addresses (in 64-bit canonical form) into a 52-bit physical space.
Again, we of course have "only" a 40-bit physical address bus on the K8.
Some future may well see a different 64-bit architecture, compatible with apps written for x86-64 long mode's 64-bit mode but dropping long mode's compatibility mode and legacy mode, that will manage mapping memory into a physical space larger than 4 PB. But that is an entirely different thing.
I thought I knew what I was talking about... obviously not. You really know your stuff, and I hope I never have to learn this, ever! My head hurts trying to piece it all together. Well, now I know not to take operating systems design. Thanks :D.
You should consider this from the application's point of view instead. A 32-bit application utilizing PAE must manage the virtual segments itself. I guess you weren't around in the days of 16-bit segmented addressing, since you're so flippant about this? ;) Using redundant far pointers would simplify things, but it would also dramatically reduce performance, by an order of magnitude.
What do you mean by redundant far pointers? I don't understand enough about paging to comment on it any more (and I don't think I knew enough to comment from the beginning, hahah). But how is it that the 32-bit pointer becomes redundant? I really don't see where it's becoming repetitive. Are you referring to the extra level of decoding that it has to go through to access the memory address?