
Dedicated HD for page file?

kdubster

Junior Member
I'm building a new computer that will include the following hard drives:
(4) 15k rpm Seagate 37GB SCSI320 drives (RAID-0 via Adaptec SCSI320 controller)
(2) 7.2k rpm IBM 160GB ATA133 drives (RAID-0 via 3Ware 8-channel IDE controller)
(4) 7.2k rpm Seagate 200GB ATA133 drives (RAID-0 via 3Ware 8-channel IDE controller)

I am planning on installing my OS and programs on the SCSI partition, also using the extra space for my "current project" (whatever that may be - very intensive A/V work). The IBM drives are for my music library, and the IDE Seagates are for my very large project/file archive.

My question is: should I invest the extra $150 in a Seagate 18GB 15k rpm SCSI320 hard drive and use it as a drive dedicated to the Windows XP (64-bit edition) page file? It would still go through the same controller card, though I wonder - would a 4-drive array be faster handling both the page file and the video data than the array doing just the video with a second drive handling the page file? Wouldn't the page file go much faster if the 4-drive array wasn't maxed out?

I do, however, have 4GB of DDR400 RAM in the computer (still sucks ass though when doing HD video projects), so unless I'm going hardcore on a project Windows XP shouldn't be paging a thing. Also, any known issues with setting a huge (17GB) page file? It would be nice to have Windows use it as the system cache while it dedicates the RAM to my video applications.

I'd also like to mention that the SCSI320 controller is 64-bit 133MHz, connected to a PCI-X slot on a dual Opteron motherboard (two 2.4GHz 😛) - so with 1GB/sec, 5 SCSI drives shouldn't max the PCI bus out.

Any thoughts or ideas would be useful, thanks.
 
You may actually have one of the rare cases here where putting your pagefile on a separate physical drive would actually make a performance difference. If you're loading VERY large files into memory from the same disk that your PF is on, it's going to drastically cut into your transfer rates if Windows is also trying to move data from RAM to the PF simultaneously. However, I'm not sure that a dedicated 15kRPM SCSI disk is really necessary -- you'd be better off, I think, with another PATA RAID0 (or just putting your PF on one of the RAID0s you already have listed), since it would offer far better transfer rates than *any* single drive, and the controller wouldn't be a bottleneck.
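To put rough numbers on the contention point (everything below is a made-up illustrative figure, not a benchmark):

```python
# Rough model: a sequential read that has to share one spindle with
# pagefile writes pays a seek penalty every time the heads switch
# between the two streams. All numbers are illustrative assumptions.

seq_rate_mb_s = 60.0   # assumed sequential transfer rate of the array
seek_ms = 8.0          # assumed average seek + rotational latency
switches_per_s = 50    # assumed head switches between streams per second

# Fraction of each second lost to seeking when both streams hit one spindle:
seek_overhead = switches_per_s * (seek_ms / 1000.0)
effective_rate = seq_rate_mb_s * (1.0 - seek_overhead)

print(f"seek overhead: {seek_overhead:.0%} of each second")
print(f"effective sequential rate: {effective_rate:.1f} MB/s")
```

Even a modest amount of PF traffic interleaved with a big sequential transfer can eat a large chunk of your throughput, which is the whole argument for a separate spindle (or array) in your case.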

That said, in an ideal world you wouldn't be hitting the pagefile at all, and I would suggest adding more RAM first. However, if your files wouldn't fit in 8GB of RAM, this might not help, either, and would cost quite a bit if you only have 4 RAM slots, since you'd have to buy 4 2GB DIMMs. I'm also not sure what (if any) limit there is to application RAM mapping in 64-bit Windows; in 32-bit Windows, no single application can access more than 2GB of RAM, even if there is 4GB in the system. I don't know if there is a similar limitation in the 64-bit version.

I have no idea about setting 'huge' pagefile sizes, either, as I haven't played around with the 64-bit version of Windows yet.
 
I would build the rig first and see how well it meets your needs. Even with an extremely high-end system like yours, $150 is a little more than a drop in the bucket, and personally I would hold off to see if the current setup is good enough and, if not, whether there is credible evidence that investing in another drive will be worth it.
 
I do it on servers, not on workstations.
I have 1 main boot drive (RAID 1) and 5 other drives with a small chunk of page file on each. I don't think there is much of a difference.
 
I thought that's part of the basic rules of optimization:

1. always put pagefile.sys on different physical hard drive away from OS/apps

2. tweak OS to minimize page file use even if it has more than enough allocated

Begs the question: why buy another $150 HD when you can put your pagefile on one of your existing drives?
 
With 4GB RAM, you should just disable your pagefile. There are a few programs that don't like running without a pagefile, but with that kind of RAM a pagefile will just slow you down.

As Matthias99 was saying, though, there is the 2GB RAM issue. I haven't seen any definitive answer about 4GB actually helping much. The sad thing is that Windows is stupid enough that you might be better off setting up some kind of a ramdisk that uses 2GB of your RAM for a swapfile (not that I've tried anything like that since Win95, where it's easy).
 
Been doing that for over 7 years now - not a separate drive, but a separate partition on the main O/S drive. That isolates the page file and prevents it from constantly causing fragmentation.
 
Originally posted by: corky-g
Been doing that for over 7 years now - not a separate drive, but a separate partition on the main O/S drive. That isolates the page file and prevents it from constantly causing fragmentation.

You know, you should always use a static swapfile because it's faster. If you use a static swapfile, it will never get fragmented. No need for a separate partition.
 
Originally posted by: Tostada
Originally posted by: corky-g
Been doing that for over 7 years now - not a separate drive, but a separate partition on the main O/S drive. That isolates the page file and prevents it from constantly causing fragmentation.

You know, you should always use a static swapfile because it's faster. If you use a static swapfile, it will never get fragmented. No need for a separate partition.
Both are wrong. A separate partition for the pagefile on the same drive as the OS results in poorer performance because the drive heads have farther to move than if the pagefile were on the OS partition. A static pagefile isn't necessary since pagefiles don't fragment over time. If the OS has to automatically resize the pagefile, only the additional portion of the pagefile can be fragmented, and that portion is deleted anyway the next time you reboot. The optimal method is to set the initial pagefile size to a value that is adequate for all your needs, but allow it to grow in the off chance you use an application or file that far exceeds your normal usage.
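One way to turn that sizing rule into actual numbers (the headroom factor, the RAM subtraction, and the helper itself are my assumptions, not anything Windows exposes):

```python
# Hypothetical helper implementing the rule above: make the initial
# pagefile size cover normal peak usage (commit beyond physical RAM,
# plus some headroom), but allow growth for rare spikes.

def pagefile_sizes_mb(peak_commit_mb, ram_mb, headroom=1.25, max_factor=2):
    """Return (initial, maximum) pagefile sizes in MB."""
    # Pagefile only needs to back commit that doesn't fit in RAM.
    initial = int(max(peak_commit_mb * headroom - ram_mb, 0))
    # Leave generous room to grow for the off-chance workload.
    maximum = max(initial * max_factor, initial + 512)
    return initial, maximum

# e.g. 6GB observed peak commit on a 4GB machine:
print(pagefile_sizes_mb(6144, 4096))
```

Feed it your own observed peak commit charge (Task Manager shows it) rather than my sample numbers.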
 
Originally posted by: Accord99
Both are wrong. A separate partition for the pagefile on the same drive as the OS results in poorer performance because the drive heads have farther to move than if the pagefile were on the OS partition. A static pagefile isn't necessary since pagefiles don't fragment over time. If the OS has to automatically resize the pagefile, only the additional portion of the pagefile can be fragmented, and that portion is deleted anyway the next time you reboot. The optimal method is to set the initial pagefile size to a value that is adequate for all your needs, but allow it to grow in the off chance you use an application or file that far exceeds your normal usage.

Go do some reading before you misinform people.

You should always use a static swapfile. It is faster. If for some reason you don't use a static swapfile, you would need to put the swapfile on a separate partition to keep it from becoming fragmented.

A dynamic swapfile will (as its name implies) resize itself, which slows things down, but it also allows the OS to write other files in sectors that used to contain the swapfile. The next time the swapfile grows, it will have to fragment.

Dynamic swapfiles always fragment if they're not on a separate partition. If you're using a dynamic swapfile, try running defrag and see just how many pieces your "Unmovable files" area is in.

EDIT: Your post actually seems like you understand this to some degree, but I think you're giving Windows too much credit. In my experience, it's not nearly smart enough to resize the swapfile appropriately as you describe.
 
The best way, by a looooong way, to speed up your paging is to add RAM.

The second best way is to put it on the most used partition of the drive that is accessed the least.


Otherwise you want the PF to be as close to the OS/apps as possible. That means on the same partition.

Limiting the size of the PF provides no benefit and has the possibility of crashing your system should it need to go beyond whatever limit you've set.

PF is dealt with in 64KB chunks, so seek time is what matters most here. This is also why the concept of a fragmented PF is silly. It's not read continuously.

Page faults are not just resolved with the PF.

Let XP or 2k manage your PF. It will do a better job than some "tweaker" myth.
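To illustrate the 64KB point with rough numbers (the drive specs below are my assumptions for a typical 7200rpm disk of this era, not measurements):

```python
# Why small pagefile reads are seek-dominated: for a 64KB chunk,
# the seek dwarfs the actual transfer, so whether chunks are laid
# out contiguously barely matters.

seek_ms = 8.5         # assumed average seek + rotational latency
transfer_mb_s = 50.0  # assumed sustained transfer rate
page_io_kb = 64       # PF I/O granularity mentioned above

transfer_ms = page_io_kb / 1024 / transfer_mb_s * 1000  # time to move 64KB
total_ms = seek_ms + transfer_ms

print(f"transfer: {transfer_ms:.2f} ms, seek: {seek_ms} ms")
print(f"seek share of each 64KB read: {seek_ms / total_ms:.0%}")
```

With the seek eating the vast majority of each access, a "defragmented" PF buys you almost nothing on scattered page-in patterns.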

---

To the OP. If you're really that starved for speed I suggest you reconsider throwing hardware at the problem and ALSO look for gains in software. What SW are you running? And what case are you gonna jam those 11 drives into?
 
We have a workstation (running some specialized medical 3D modeling software from Vital Images), and on that machine the pagefile is on a separate drive, formatted to FAT32. One of the system engineers told me (when I asked about the FAT32 business) that it's supposed to be faster for a pagefile than NTFS - no extra overhead like permissions/security, etc. on FAT32.

Don't know if he's right or not, but that's what he said.
 
Some of the applications I am running do have 64-bit versions available, removing the 2GB limitation of a single application (note: Windows only limits the total RAM available to the computer, not to a single application - it's the fact that the application is 32-bit that limits it). However, my issue is that I need *more* than my 4GB of memory. Working with HD video (not watching - editing), I can keep about 15 seconds in memory to watch in realtime (the RAM Preview feature) - that's why I am putting 4GB of RAM in the system. The motherboard has 8 DDR400 RAM slots (4 per CPU - though each CPU can access both sets of RAM) and supports 2GB chips for a total of 16GB of RAM.
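Rough math behind that 15-second figure (the pixel format and frame rate are my assumptions - the app may store frames differently):

```python
# Memory needed to hold 15 seconds of uncompressed HD preview frames.
# Assumes 1080p, 8-bit RGBA (4 bytes/pixel), ~30 fps.

width, height = 1920, 1080
bytes_per_pixel = 4   # 8-bit RGBA (assumption)
fps = 30
seconds = 15

frame_bytes = width * height * bytes_per_pixel
total_gb = frame_bytes * fps * seconds / 2**30

print(f"one frame: {frame_bytes / 2**20:.1f} MB")
print(f"{seconds}s preview: {total_gb:.1f} GB")
```

That lands around 3.5GB for the preview alone, which is why 4GB gets tight once the OS, the editor itself, and everything else want their share.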

I don't want a page file on my IDE drives at all - they are in very special configuration (which would take too long to explain how/why) which would be unsuitable for a page file.

As you noted, considering how expensive this computer is, an extra $150-350 isn't too much to spend to increase performance.

Disabling your page file (for me at least) is a VERY bad idea; in my own tests most of my applications fail - they are high-end video applications designed to take advantage of virtual memory.

Considering that the bulk of the RAM is being used for HD video RAM Previews, the only data that would spill over to the page file would be the system cache and any background applications, leaving (mainly) my editing application in memory - so I'm not too concerned about a paging file, though if it'll give me even a 1% performance difference it'll be worth the additional money.

Thanks for your input so far 🙂
 
Originally posted by: Accord99
That's how Windows works. After a resize, on the next reboot the additional space that was allocated to the pagefile is deleted and the pagefile returns to its original size. The original space has not been fragmented in any way.

http://www.tweakhound.com/xp/virtualmemory.htm

I certainly don't believe everything in that link. Besides the horrible English, the author speaks very authoritatively about things he doesn't seem to fully understand. It's more a collection of technical terms that he is throwing around. Specifically, his explanation of why pagefile fragmentation is a myth is completely flawed. If your pagefile is fragmented, there is a chance that rebooting will cause it to be non-fragmented when it shrinks, but what good is that? The pagefile had to grow before (becoming fragmented, slowing things down, using a slower part of the drive), and it will no doubt have to grow again. You'd be better off tracking this behavior and creating a static, non-fragmented swapfile of the size you actually need.

I can't believe multiple people are saying not to limit the size of the swapfile. I thought this was common knowledge. I've seen the defrag analysis more times than I can count. When Windows manages the pagefile, it fragments.

I've set up countless machines with 512MB RAM and a static 512MB swapfile, and the only time I've seen this become an issue is when software has some kind of memory leak. For example, in Jedi Academy, after you save/load, the game's memory usage goes up about 40MB each time, so after about 20 times your pagefile is swapping constantly and the game exits with an error message about not being able to allocate memory. This is a legitimate software problem. If you let Windows manage the swapfile, the program will still crash. The only difference is that your whole system will grind to a near standstill before that happens. Maybe there's a legitimate reason for someone with 512MB+ to let Windows make a bigger swapfile than there is RAM, but I've never run into it. OTOH, I have seen definite improvements when you force a 256MB system to use a 384MB swapfile, or force a 128MB system to use a 256MB swapfile. If you let Windows do its own thing, it'll start off with a smaller swapfile and always resize it, always fragment it, and always spend more time churning the drive.
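For what it's worth, a static size set this way ends up in the registry under the Memory Management key - the value format below is from memory, so verify on your own box before touching it (min and max are in MB):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
    PagingFiles (REG_MULTI_SZ) = "C:\pagefile.sys 512 512"
```

Setting min and max to the same number is exactly what "static swapfile" means; the System control panel writes this same value for you.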

Maybe I'm completely wrong, even though I've never run into problems with static swapfiles. If Windows actually resizes things intelligently, then I guess you might as well let it do what it wants. The problem I see with that is that most systems want to have the swapfile and the OS both at the beginning of the fastest drive. When you set up a machine, you want a defragmented swapfile close to the beginning of this drive. So, you set up Windows, do all your XP optimizing, then disable virtual memory, defrag your C: drive, re-enable virtual memory, and make sure your swapfile is in one big chunk. I have seen this done on systems with Windows-managed swapfiles, only to come back later and see the swapfile fragmented. Even if you defrag the hard drive, it's not going to fix the swapfile. You have to disable it or move it to another drive, then reboot before you defrag.

In my experience, Windows is too stupid to be trusted. I don't trust it to manage my swapfile any more than I trust it to know which services I want taking up memory and providing back doors for worms.
 
I've never had to set my page file larger than 256 MB for Windows. However, if you do set aside a dedicated partition for your swap file (make it the fastest one on the drive or array you choose to use), you could just set a minimum size (say 512 MB) and leave it dynamic for the top limit.

..bh.

It's Sunday, have some :wine: !
 
Originally posted by: addragyn
The best way, by a looooong way, to speed up your paging is to add RAM.

Correct.

The second best way is to put it on the most used partition of the drive that is accessed the least.
Otherwise you want the PF to be as close to the OS/apps as possible. That means on the same partition.

You want the pagefile on a different drive. Failing that, it doesn't matter much where you put it - you're still running on the same spindle, so that will hurt performance.
 
Originally posted by: Tostada
With 4GB RAM, you should just disable your pagefile. There are a few programs that don't like running without a pagefile, but with that kind of RAM a pagefile will just slow you down.

As Matthias99 was saying, though, there is the 2GB RAM issue. I haven't seen any definitive answer about 4GB actually helping much. The sad thing is that Windows is stupid enough that you might be better off setting up some kind of a ramdisk that uses 2GB of your RAM for a swapfile (not that I've tried anything like that since Win95, where it's easy).

You may wish to read up on PAE. Windows Server versions can use far more than 4GB of RAM in the computer. You'll pay for it, but 32GB+ RAM implementations are certainly out there and available.

/3GB may also help in this scenario too, but some apps don't like that. Test a bit.
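A sketch of what those switches look like in boot.ini (the ARC path and entry name are placeholders - your existing line will differ):

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect /3GB /PAE
```

Keep in mind /3GB only helps an app that was linked large-address-aware; a plain 32-bit app still tops out at 2GB of user address space.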
 