PF Size - Optimum for 1GB? And two-partition requirements?


imported_goku

Diamond Member
Mar 28, 2004
Originally posted by: craige4u
Goku, I appreciate you taking the time to write such a lengthy post!
In fact, I have a 120GB SATA HD too!

So, what do you think of my current pagefile settings? The PF is set to 1536 MB min. and 3072 MB max., on the Windows drive.

Also, as I stated earlier, I have two partitions on my HD: Drive C: for Windows and Drive D: for games, music, etc. What's your take on putting the PF on Drive D:?

Any recommendations for the PF settings if I upgrade to 2GB of RAM?
Personally, the most I've ever used of my page file is around 1.2GB, which is HUGE; that's about 2.2GB of memory in use in total. If you get 2GB of RAM, I'd make the pagefile 2GB, using up the full 4GB of address space.

Disable the pagefile, defrag, then recreate the pagefile, and make sure your free space is 100% contiguous. I use Diskeeper, but others have had luck with PerfectDisk. I tried out PerfectDisk, and IMO I like Diskeeper's interface better; IIRC Diskeeper 10 should be just as fast/good as PerfectDisk. Sometimes when you defrag, the files become contiguous but the free space does not, and it's critical that your free space is contiguous. If you have two drives, it's good practice to put the pagefile on the opposite drive from the one with all the resource-intensive programs installed on it.
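If you'd rather script the fixed-size step than click through System Properties each time, here's a minimal Python sketch that writes the documented PagingFiles registry value (REG_MULTI_SZ under the Memory Management key). It assumes Windows, admin rights, and a reboot afterward; the C: drive and 2048MB sizes are just the example from above.

```python
# Sketch: pin the pagefile to a fixed size by editing the documented
# PagingFiles registry value. Requires admin; takes effect on reboot.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

def set_fixed_pagefile(drive="C:", size_mb=2048):
    # "C:\pagefile.sys 2048 2048" -> initial and maximum both 2048MB,
    # so the file never grows (and never fragments) after creation.
    entry = rf"{drive}\pagefile.sys {size_mb} {size_mb}"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "PagingFiles", 0,
                          winreg.REG_MULTI_SZ, [entry])

if __name__ == "__main__":
    set_fixed_pagefile("C:", 2048)
```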

Edit: The placement of the PF only matters if you've got two PHYSICAL drives, which I thought you had; since you actually have two partitions on one drive, you should keep the pagefile on C:\, as that's the partition closest to the outside of the platters.
 

CTho9305

Elite Member
Jul 26, 2000
WTF are you people doing to need 1.5GB of RAM + a 1.5GB page file? Running Firefox (which leaks), Azureus (which uses boatloads of RAM), gaim (which leaks), Apache (which for me ends up around 160MB), coLinux (which needs 256-512MB), and a Mozilla compile (which peaks around ~512MB at a couple of points), I can't get my commit charge above 1.8GB. It usually hangs around 1GB.
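For anyone who wants to check their own commit charge without opening Task Manager, here's a rough Python sketch using ctypes and the Win32 GlobalMemoryStatusEx call (Windows only; the field names follow the documented MEMORYSTATUSEX struct).

```python
# Sketch: read physical memory and commit (pagefile) usage via the
# Win32 GlobalMemoryStatusEx API. Windows only.
import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),  # commit limit
        ("ullAvailPageFile", ctypes.c_ulonglong),  # commit still available
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

def commit_charge_mb():
    stat = MEMORYSTATUSEX()
    stat.dwLength = ctypes.sizeof(stat)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
    used = stat.ullTotalPageFile - stat.ullAvailPageFile
    return used // (1024 * 1024)

if __name__ == "__main__":
    print(f"Commit charge: ~{commit_charge_mb()}MB")
```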
 

Nothinman

Elite Member
Sep 14, 2001
If you have 2GB or more of RAM, why not just disable it? Or do some programs require the page file even when you have enough RAM? I've heard that's only the case on Windows, and that in Linux the page file can be disabled if you have plenty of physical RAM. Is that correct, and why would that be?

NT was designed with the assumption that a pagefile will be available; hell, if you removed the pagefile from NT4 it would BSOD on bootup, so there are operations that require a pagefile to be present in order to succeed. I believe MS is 'fixing' this because the XP PE stuff can run from CD and obviously can't require a pagefile, but that's a special case. If you run your system without a pagefile all of the time, all you're really doing is putting unnecessary pressure on the VM system. Instead of being able to push the least-used memory pages out to the pagefile and free more memory, it has to leave them in RAM and evict things like executables, shared libraries, etc. that actually have a backing store on disk somewhere. And because it's forced to keep more stuff in main memory, it has less space for the system cache, so in some situations you'll actually end up creating more disk I/O.
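To make that concrete, here's a toy Python model; it's a deliberate simplification, not the real NT VM algorithm. Anonymous pages have nowhere to go without a pagefile, so under memory pressure the only evictable pages are file-backed ones, which is exactly the cache you wanted to keep.

```python
# Toy model of eviction under memory pressure. Anonymous pages
# (heap, stacks) have no backing store; file-backed pages
# (executables, DLLs, cache) can always be dropped and re-read.
# Illustration only -- not how the real NT VM manager works.

def evictable_pages(pages, pagefile_present):
    evictable = []
    for kind in pages:
        if kind == "file-backed":
            evictable.append(kind)      # re-readable from disk
        elif kind == "anonymous" and pagefile_present:
            evictable.append(kind)      # can be written to the pagefile
        # anonymous + no pagefile: pinned in RAM, cannot be evicted
    return evictable

ram = ["anonymous"] * 60 + ["file-backed"] * 40   # 100 pages of "RAM"

with_pf    = evictable_pages(ram, pagefile_present=True)
without_pf = evictable_pages(ram, pagefile_present=False)
print(f"With pagefile:    {len(with_pf)} of {len(ram)} pages evictable")
print(f"Without pagefile: {len(without_pf)} of {len(ram)} pages evictable")
# Without a pagefile only the 40 file-backed pages can go, so the
# system cache shrinks first -> more disk I/O, as described above.
```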

And yes, you can disable any swap partitions or files in Linux and run without them. But you'll also probably end up with the OOM killer regularly killing large processes like Firefox or OpenOffice.org.
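If you want to see what your Linux box actually has configured before pulling swap, a quick sketch that parses /proc/meminfo (standard on any Linux kernel) does it:

```python
# Sketch: report swap configuration on Linux by parsing /proc/meminfo.
def swap_status():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])   # values are in kB
    total, free = info["SwapTotal"], info["SwapFree"]
    if total == 0:
        return "No swap configured -- the OOM killer is your safety net."
    return f"Swap: {(total - free) // 1024}MB used of {total // 1024}MB"

print(swap_status())
```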
 

bsobel

Moderator Emeritus, Elite Member
Dec 9, 2001
1. Take information that is larger than the fragments the pagefile has been split into (say, on a relatively fragmented drive, a PF in 60 fragments, 50 of which range from 1MB to 160MB), and now you've got a game loading textures of 10MB minimum and 100MB on average. As the pagefile fills toward Windows' default size, all those scattered pieces designated as pagefile slow down pagefile writes: the texture data has to be split into multiple pieces, which takes processing power, and then the drive has to find those fragments, write to them, and repeat the process. All of this would be so much easier with a contiguous pagefile, because there would be no splitting up of textures and no random seeking across the drive to reach the bits of pagefile.

I see where you are going; the problem is that data in the page file is accessed in very small amounts (very few long reads). In some cases, having the pagefile fragmented near the part of the disk currently being accessed is actually quicker, since in the layout you describe the heads always have to seek to get to the PF, whereas when it's intermixed with the data, they have to seek less. The seek times become just as important (often more so) than the speed boost you get from being on the outside of the drive. But it's one of those cases where you really can't say that contiguous is better for everyone, or that fragmented is. I agree with most of the other posters that just using the defaults is best for most users. I have yet to see a poster here really quantify (via benchmarks) any speed difference from making it contiguous.

Bill
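Bill's point is easy to illustrate with a back-of-the-envelope model. The numbers below (8ms full-stroke seek, 2ms short hop, 4KB pages, 60MB/s outer-track vs 45MB/s inner-track transfer) are made up but plausible for a drive of that era; the shape of the result is what matters.

```python
# Back-of-the-envelope: 10,000 random 4KB pagefile reads under two
# layouts. Made-up but era-plausible drive numbers.
PAGE_KB       = 4
N_READS       = 10_000
FULL_SEEK_MS  = 8.0    # hop from the data area to a far-away contiguous PF
SHORT_SEEK_MS = 2.0    # hop to a PF fragment intermixed with the data
OUTER_MB_S    = 60.0   # outer-track transfer rate (contiguous PF)
INNER_MB_S    = 45.0   # inner/middle-track rate (fragmented PF)

def total_ms(seek_ms, mb_per_s):
    transfer_ms = (PAGE_KB / 1024) / mb_per_s * 1000   # per 4KB page
    return N_READS * (seek_ms + transfer_ms)

contiguous = total_ms(FULL_SEEK_MS, OUTER_MB_S)
fragmented = total_ms(SHORT_SEEK_MS, INNER_MB_S)
print(f"Contiguous PF, far from data: {contiguous / 1000:.1f}s")
print(f"Fragmented PF, near the data: {fragmented / 1000:.1f}s")
# Seek time dwarfs the ~0.07ms transfer of a 4KB page, so the shorter
# seeks win despite the slower track -- which is Bill's argument.
```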

 

n0cmonkey

Elite Member
Jun 10, 2001
Originally posted by: goku
I'll probably get flamed by n0cmonkey because he is insistent that Microsoft has made their operating system "optimized," therefore it requires no changing of settings whatsoever after leaving the box... :roll:

Please show me where I said those words.

I would argue, however, that it is "optimized." But there are definitely settings that need adjustment after installation.

Mini summary:
1. Because your pagefile grows with use, it becomes fragmented, increasing the drive's dependency on its access times.
2. Since your pagefile is fragmented, anything you run that needs a good amount of pagefile, for example games, has to be split into sizes corresponding to the pagefile's fragments, slowing things down a bit.
3. Because your pagefile is fragmented, it essentially folds onto itself like an obese man's flabby man tit hitting him in the face when he lies down: it requires more overhead to deal with, and it doesn't get the faster access times/transfer rates of the outer platter like a contiguous pagefile does.
4. Because your pagefile is fragmented, your files become fragmented, and then it snowballs...

How much performance do you gain with less fragmentation? Provide some numbers; I'm curious. (dguy6789 already refused to answer the question.)

5. Because you can't argue that it's "hurting performance," and it could only be improving performance of the pagefile and the drive as a whole, the only feasible argument against doing this is that it's "a waste of time."

Please provide benchmarks showing that this improves performance. And by improves, please show something over 5%.
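For anyone who actually wants to produce those numbers, here's a rough Python harness (an editorial sketch, not from anyone in the thread) that times random 4KB reads over a large file; run it against a file created on freshly defragged, contiguous free space and again against one on fragmented space, and compare. The filename 'testfile.bin' is a placeholder, and OS caching will skew results unless the file is much bigger than RAM.

```python
# Rough harness for the "show me benchmarks" request: time random
# 4KB reads over a large test file. Sketch only -- make the file far
# larger than RAM, or results will mostly measure the OS cache.
import os
import random
import time

PAGE = 4096

def random_read_benchmark(path, reads=5000, seed=42):
    rng = random.Random(seed)                 # same offsets every run
    size = os.path.getsize(path)
    offsets = [rng.randrange(0, size - PAGE) for _ in range(reads)]
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:  # unbuffered to hit the disk
        for off in offsets:
            f.seek(off)
            f.read(PAGE)
    elapsed = time.perf_counter() - start
    return reads / elapsed                    # reads per second

if __name__ == "__main__":
    print(f"{random_read_benchmark('testfile.bin'):.0f} random 4KB reads/s")
```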