The importance of 64 bit?

PDenton

Junior Member
Jun 4, 2006
4
0
0
Well, I have a question about notebook processors. I'm looking at a fairly nice laptop (about $1,800 worth) with a Core Duo.

The thing I'm worried about is that the Core Duo is only 32-bit... Should I wait to pick up a Turion 64 X2 or a Core 2 Duo? If that's the course I should take, how long will I have to wait? I got really excited about getting a new computer and I don't want to wait long, but it seems like I should.

Will there be a big difference between Vista 64-bit and 32-bit, and do you guys think 64-bit will have a huge impact on the future of gaming?

Thanks for any help.

 

Kakumba

Senior member
Mar 13, 2006
610
0
0
64-bit is irrelevant to the majority of users at the moment. There will still be 32-bit versions of Vista (in fact, of the millions of flavours of Vista, how many ARE 64-bit?), so I personally don't think 64-bit support is important unless you are planning on running x64 Windows (which I doubt).

So, go for it, unless the other laptops would be better in other areas, are still within your price range, and you can wait that long.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
The more memory you use (RAM and swap file), the faster a 64-bit app will perform compared to a 32-bit app.
Vee has written an excellent post explaining just that...
Post

So, the question becomes one of what kind of laptop you plan on buying...if you are only going to use 512MB of RAM and not run large programs, then 64-bit will have very little effect on you. However, many modern games run better as you increase the amount of RAM...for these, 64-bit will have a very significant effect.
The bottom line is that it will depend on your usage.
 

FelixDeCat

Lifer
Aug 4, 2000
30,957
2,670
126
Originally posted by: Kakumba
64-bit is irrelevant to the majority of users at the moment. There will still be 32-bit versions of Vista (in fact, of the millions of flavours of Vista, how many ARE 64-bit?), so I personally don't think 64-bit support is important unless you are planning on running x64 Windows (which I doubt).

So, go for it, unless the other laptops would be better in other areas, are still within your price range, and you can wait that long.

I'm planning on going Vista 64 or bust.
 

ahock

Member
Nov 29, 2004
165
0
0
I don't think a 64-bit CPU is of any use for notebooks today, or even for desktops. How many laptops, or even desktops, offer more than 4GB of system memory?

64-bit will only be useful once you exceed 4GB of memory, and that will not come in the near future, least of all on laptops.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: ahock
I don't think a 64-bit CPU is of any use for notebooks today, or even for desktops. How many laptops, or even desktops, offer more than 4GB of system memory?

64-bit will only be useful once you exceed 4GB of memory, and that will not come in the near future, least of all on laptops.

Actually, that's not quite true...even at 2GB, 64-bit will have much better performance due to the limits of 32-bit virtual addressing. Read the post I linked and you'll understand why...
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: Viditor
Originally posted by: ahock
I don't think a 64-bit CPU is of any use for notebooks today, or even for desktops. How many laptops, or even desktops, offer more than 4GB of system memory?

64-bit will only be useful once you exceed 4GB of memory, and that will not come in the near future, least of all on laptops.

Actually, that's not quite true...even at 2GB, 64-bit will have much better performance due to the limits of 32-bit virtual addressing. Read the post I linked and you'll understand why...

Viditor, you may have slightly misunderstood a couple of details. Regardless, let me try to be clear about a couple of things.

The idea that 64-bit will only be useful above 4GB of memory is indeed wrong, but that is because the true limit is in fact lower: roughly 1.7-1.8GB (absolute limit 2GB) for a normal Win32 application, and roughly 2.5GB for an application that uses the /3GB switch.
This is where a Win32 application will run out of memory and stop running. It makes no difference how much RAM you have installed, 1GB or 4GB, because this is not an issue of physical memory; it is a limit of the 32-bit virtual address space.
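
To make that concrete, here is a rough C sketch (my own illustration, not taken from any real application) that just keeps asking malloc for 64MB blocks until it fails. Built as a 32-bit Windows program it will typically give up somewhere below 2GB, whether the machine has 1GB or 4GB of RAM; built from the same source as 64-bit, the address-space wall effectively disappears and you hit RAM/swap limits instead.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t block = 64UL * 1024 * 1024;   /* 64MB per request */
        unsigned long total_mb = 0;
        void *p;

        /* Keep allocating until the process's virtual address space runs out.
           Nothing is freed on purpose: we want to find the ceiling. */
        while ((p = malloc(block)) != NULL)
            total_mb += 64;

        printf("malloc failed after about %lu MB\n", total_mb);
        return 0;
    }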

(If you have less physical RAM than you need, you will see a drastic drop in performance, since you will rely more on the hard-disk swap file and generate a lot of disk activity. That is an old, well-known issue and has nothing to do with 32 vs 64 bit.)

You can have 32-bit software that can use more memory than our current 32-bit Windows allows, and you can have an OS that runs such 32-bit software. If we had that, it would perform worse than 64-bit software, yes. But we do not have that: Win32 applications and our normal varieties of Windows cannot do this, and we will not go that way. Instead of adopting a segmented 32-bit program model to replace our current Windows and Linux software, we will instead adopt a 64-bit program model.

****

On a different issue that I also touched on - the fragmentation of virtual address space - the OS cannot fix this transparently for the application. The application can try to fix it itself; the API provides for moving memory blocks. But a straightforward program will just terminate with an out-of-memory error.

So in practice it's not much of a performance issue: 32-bit programs will simply stop running. It would have become a performance issue if we had chosen to replace our current 32-bit software with a new, different 32-bit program model instead of a 64-bit one.
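
If you want to see the fragmentation effect itself, here is another rough sketch (again only an illustration; the exact numbers depend on the C runtime's allocator). It chops the address space into 128MB chunks, frees every other one, and then asks for a single 512MB block. In a 32-bit process that last request can fail even though far more than 512MB is "free", because no single hole is big enough; in a 64-bit process the address space is so enormous that this practically never happens.

    #include <stdio.h>
    #include <stdlib.h>

    #define BLOCKS   14
    #define BLOCK_SZ ((size_t)128 * 1024 * 1024)   /* 128MB */

    int main(void)
    {
        void *chunk[BLOCKS];
        int i;

        /* Fill most of a 32-bit address space with 128MB chunks. */
        for (i = 0; i < BLOCKS; i++)
            chunk[i] = malloc(BLOCK_SZ);

        /* Free every other chunk: plenty of free space in total,
           but only in separate 128MB holes. */
        for (i = 0; i < BLOCKS; i += 2)
            free(chunk[i]);

        /* One contiguous 512MB run is now hard to find in a 32-bit process. */
        if (malloc((size_t)512 * 1024 * 1024) == NULL)
            printf("512MB request failed: the virtual space is fragmented\n");
        else
            printf("512MB request succeeded\n");
        return 0;
    }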

Besides the inherent general superiority of 64-bit addressing, a linear 64-bit model is also much more similar to our current linear 32-bit model than a segmented model would be, so it is easier to port to 64-bit as well.

****

(32 vs 64 bit is also a performance issue for different reasons: x86-64 is a new ISA with twice the number of visible registers, and the GP registers are truly general purpose, all of them. So a compiler that is able to optimize for this will produce somewhat faster executables.)
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: Vee
Originally posted by: Viditor
Originally posted by: ahock
I don't think a 64-bit CPU is of any use for notebooks today, or even for desktops. How many laptops, or even desktops, offer more than 4GB of system memory?

64-bit will only be useful once you exceed 4GB of memory, and that will not come in the near future, least of all on laptops.

Actually, that's not quite true...even at 2GB, 64-bit will have much better performance due to the limits of 32-bit virtual addressing. Read the post I linked and you'll understand why...

Viditor, you may have slightly misunderstood a couple of details. Regardless, let me try to be clear about a couple of things.

The idea that 64-bit will only be useful above 4GB of memory is indeed wrong, but that is because the true limit is in fact lower: roughly 1.7-1.8GB (absolute limit 2GB) for a normal Win32 application, and roughly 2.5GB for an application that uses the /3GB switch.
This is where a Win32 application will run out of memory and stop running. It makes no difference how much RAM you have installed, 1GB or 4GB, because this is not an issue of physical memory; it is a limit of the 32-bit virtual address space.

(If you have less physical RAM than you need, you will see a drastic drop in performance, since you will rely more on the hard-disk swap file and generate a lot of disk activity. That is an old, well-known issue and has nothing to do with 32 vs 64 bit.)

You can have 32-bit software that can use more memory than our current 32-bit Windows allows, and you can have an OS that runs such 32-bit software. If we had that, it would perform worse than 64-bit software, yes. But we do not have that: Win32 applications and our normal varieties of Windows cannot do this, and we will not go that way. Instead of adopting a segmented 32-bit program model to replace our current Windows and Linux software, we will instead adopt a 64-bit program model.

****

On a different issue that I also touched on - the fragmentation of virtual address space - the OS cannot fix this transparently for the application. The application can try to fix it itself; the API provides for moving memory blocks. But a straightforward program will just terminate with an out-of-memory error.

So in practice it's not much of a performance issue: 32-bit programs will simply stop running. It would have become a performance issue if we had chosen to replace our current 32-bit software with a new, different 32-bit program model instead of a 64-bit one.

Besides the inherent general superiority of 64-bit addressing, a linear 64-bit model is also much more similar to our current linear 32-bit model than a segmented model would be, so it is easier to port to 64-bit as well.

****

(32 vs 64 bit is also a performance issue for different reasons: x86-64 is a new ISA with twice the number of visible registers, and the GP registers are truly general purpose, all of them. So a compiler that is able to optimize for this will produce somewhat faster executables.)

Thanks for the correction, Vee!
Some questions for you then...

1. You said that there is a 2GB limit for an application in 32-bit Windows; is this for a single app or for the total of the threads that are running?

2. Is there a limit/performance issue for the data of the application? As an example, I work with very large video files on a regular basis...what is the effect of 32-bit vs 64-bit on these?

3. To be clear on the fragmentation issue: the OS cannot fix this in 32-bit virtual space (causing the 32-bit app to report out-of-memory), but in 64-bit space it is no longer an issue?

Many thanks!
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: orangat
Why only 16 GPRs? Why not 32, like many other CPUs?

At a guess, I'd say it's because of the limits of x86...but I should let Vee answer.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: Viditor
Some questions for you then...

1. You said that there is a 2GB limit for an application in 32-bit Windows; is this for a single app or for the total of the threads that are running?
The limit is entirely concerned with how the application sees itself, so it's per process or application. How much the OS can handle for the sum of needs of multiple processes is a different thing; 4GB should not be a problem for modern Windows, and server versions can handle even more.

"threads" ?? If an application is multithreaded, all the threads run together inside the same 2GB space.

2. Is there a limit/performance issue for the data of the application? As an example, I work with very large video files on a regular basis...what is the effect of 32-bit vs 64-bit on these?
Well, 64-bit should make a lot of things much easier for applications that need to handle very large data objects. I do not know the particulars of the video file case, however.
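
One concrete case where it matters (my own sketch, not something from the video tools themselves): memory-mapping a whole file. POSIX calls are shown for brevity; on Windows the equivalents are CreateFileMapping/MapViewOfFile. A 4GB video file can be mapped in one piece by a 64-bit process, while a 32-bit process simply has no 4GB run of addresses to put it in and has to slide a smaller window over the file instead.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) != 0) { perror("open/fstat"); return 1; }

        /* A file bigger than size_t cannot even be described to mmap here. */
        if ((unsigned long long)st.st_size > (size_t)-1) {
            fprintf(stderr, "file is larger than this process can address\n");
            return 1;
        }

        /* Map the entire file read-only and touch the first byte. */
        unsigned char *data = mmap(NULL, (size_t)st.st_size, PROT_READ,
                                   MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) {
            perror("mmap");   /* the likely outcome for a huge file on 32-bit */
            return 1;
        }
        printf("mapped %lld bytes, first byte = 0x%02x\n",
               (long long)st.st_size, data[0]);
        munmap(data, (size_t)st.st_size);
        close(fd);
        return 0;
    }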

3. To be clear on the fragmentation issue: the OS cannot fix this in 32-bit virtual space (causing the 32-bit app to report out-of-memory), but in 64-bit space it is no longer an issue?
It shouldn't ever be an issue, since the space is so very much larger than the maximum virtual memory limit. However, *theoretically*, I believe it's possible.


 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: Viditor
Originally posted by: orangat
Why only 16 GPRs? Why not 32, like many other CPUs?

At a guess, I'd say it's because of the limits of x86...but I should let Vee answer.

I can't give a firm answer, but yours is probably a good guess. I believe part of the reason may be that it fits so elegantly with the previous opcode format, which means it costs less hardware to support both the 32-bit and the 64-bit parts of the ISA side by side.

Overall, I think the answer to "why not 32?" is the same as the answer to "why not 4096?". It's the usual tradeoff to maximize performance on any given piece of transistor real estate. Consider the "diminishing returns" factor. Also consider that this is not a RISC processor, where a compiler uses the ISA to control load/store/execution in detail, but an out-of-order, speculative-execution CISC processor. The ISA is only a logical interface from the software's point of view. The processor uses a lot of clever tricks to speed up execution. One aspect of these tricks is that the processor juggles a lot of instructions, keeping them all in the air at the same time. Each instruction has its own idea of what the contents of the registers are at any moment, and that idea may differ from the idea that any other instruction has. For this reason the processor internally uses hundreds of hidden registers to manage it all. That is why we say "visible" registers about the registers defined by the ISA.

An increased number of visible registers costs complexity all the way, which in turn means that we cannot afford the same amount of clever tricks. I do not know where the point is at which an increased number of visible registers results in lower performance on a given die size, but it seems reasonable that such a point exists.
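
If anyone wants to see the extra registers at work, a quick experiment (my own example, and the exact result is compiler-dependent): take a small function with more simultaneously live values than the eight old GPRs can hold, compile it once with "gcc -O2 -S -m32" and once with "gcc -O2 -S -m64", and compare the assembly. The 32-bit output typically has to spill some of the accumulators to the stack, while the 64-bit output can keep them all in registers, r8-r15 among them.

    /* Eight accumulators plus the pointer, the count and the index are
       more live values than 32-bit x86 has registers for, but comfortably
       fewer than the sixteen GPRs of x86-64. */
    long sum_eight_ways(const long *v, long n)
    {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0, s5 = 0, s6 = 0, s7 = 0;
        long i;

        for (i = 0; i + 8 <= n; i += 8) {
            s0 += v[i];     s1 += v[i + 1];
            s2 += v[i + 2]; s3 += v[i + 3];
            s4 += v[i + 4]; s5 += v[i + 5];
            s6 += v[i + 6]; s7 += v[i + 7];
        }
        return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
    }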