Originally posted by: ND40oz
Your page file should be 1.5 times as large as the amount of system RAM.
But the reason behind this advice is that you want the OS to be able to write a complete crash dump in case of a BSOD. If that isn't important to you, then obviously you don't have to follow it... (I take a different approach with my servers -- I want them to cough up a complete dump, but at home I'd only bother if I got repeated BSODs and suspected a kernel dump wasn't sufficient.)
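To make the two sizing rules concrete, here is a small illustrative sketch. The numbers follow the conventional wisdom above: the 1.5x figure is the rule of thumb quoted in the thread, and a complete memory dump needs a pagefile at least the size of physical RAM plus a small dump header (commonly cited as about 1 MB). The function name and the exact header allowance are mine, for illustration only.

```python
# Sketch: relate physical RAM to the two pagefile sizes discussed above.
# - "1.5x RAM" is the rule of thumb from the quoted post.
# - A complete crash dump needs roughly RAM + ~1 MB (dump header);
#   the 1 MB allowance here is an assumption for illustration.

def pagefile_recommendations(ram_mb):
    """Return (rule_of_thumb_mb, complete_dump_minimum_mb) for a
    machine with ram_mb megabytes of physical memory."""
    rule_of_thumb = int(ram_mb * 1.5)   # the conventional 1.5x figure
    complete_dump_min = ram_mb + 1      # RAM plus ~1 MB for the header
    return rule_of_thumb, complete_dump_min

# Example: a 2 GB machine
print(pagefile_recommendations(2048))  # -> (3072, 2049)
```

The point of separating the two values: if all you care about is the complete dump, the second number is the real floor, and anything above it is headroom for paging.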
Originally posted by: RichUK
If you just defrag the whole system, would that be sufficient?
The pagefile is always in use. There is a window of opportunity while the system boots, but not all defraggers support that, and not everyone reboots that often... (At work I only reboot when installing a security hotfix or adding hardware -- that can be as rare as every half year, and even then I'd be grumpy if I had to wait for a defragger to do its stuff.)
As for seek time: as I said, I have SCSI drives and plenty of memory. I don't expect the system to page very often, and SCSI drives seek fast anyway... But certainly that isn't everyone's situation, and I too would advocate a static pagefile -- there's no reason to let it fragment if that can be avoided. I suspect, though, that if you let Windows manage the size, it will start at the size of physical memory, precisely for the reason stated above (preserving the ability to write a complete crash dump after a BSOD).
Besides... If the system runs out of virtual memory, you'll get a "low on virtual memory" error message, and then you'll know you have to buy more memory, tweak the pagefile settings, or check whether you're running any apps that leak memory. IMO it is better to investigate these things properly than to let the pagefile grow automatically.
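For the "check if an app leaks memory" step, a crude heuristic is enough to pick suspects: sample a process's memory usage periodically (e.g. working-set figures read off Task Manager) and flag processes whose usage only ever grows. This sketch is mine, not anything from the thread; the function name, threshold, and sample values are illustrative.

```python
# Illustrative leak heuristic: given periodic memory samples for one
# process (in MB), flag it if usage rose at every sample AND the total
# growth is significant. The 50 MB threshold is an arbitrary example.

def looks_like_leak(samples_mb, min_growth_mb=50):
    """True if memory usage rose between every consecutive pair of
    samples and total growth is at least min_growth_mb."""
    if len(samples_mb) < 3:
        return False  # too few samples to call it a trend
    rising = all(b > a for a, b in zip(samples_mb, samples_mb[1:]))
    return rising and (samples_mb[-1] - samples_mb[0]) >= min_growth_mb

print(looks_like_leak([120, 180, 260, 410]))  # steady growth -> True
print(looks_like_leak([120, 95, 130, 110]))   # fluctuating   -> False
```

A real leak hunt would of course look at longer traces and distinguish caching from leaking, but monotonic growth over many samples is usually the first thing to check before resizing the pagefile.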
Someone said that disk space is cheap, but adding another drive also adds noise and heat. And SCSI drives aren't exactly what I'd call cheap. (But then I have to pay 25% VAT on top of whatever you guys pay...)