pagefile benchmarks: What do you recommend?

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Maybe we can solve this once and for all. What benchmarks that won't cost me more than a bit of time can measure the differences between pagefile settings?

Can we come up with a testing methodology? I'm willing to donate time and hardware to get some answers to this constant flamewar. :)

Hardware I'd test on:
Athlon XP 2400+ Mobile running at about 2GHz.
Anywhere between 256MB and 1.5GB of RAM.
Crappy video card.
2x 200GB drives (either both Seagate, or one Seagate and one WD PATA).
Windows XP SP2.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
What "settings"? Like figuring out if a dedicated partition for a pagefile is "faster" than putting it on the same partition as your OS, fixed vs. variable size, etc?

I don't see the point. No matter how "fast" you make your pagefile, it's going to be hundreds or thousands of times slower than RAM. Nothing you can do will noticeably improve performance in that situation (other than putting the swapfile on a hard drive with much faster seek times, like 15K RPM SCSI, and even that will have only a limited impact). If you're hitting the swapfile in your system on a regular basis, you need more RAM.
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
I think the benchmark would have to involve stability over the long term too. I always set up my machines with static page files so they don't fragment... I've been flamed for it before, but when your page file is in 1000+ fragments, it can get messy (yes, I have seen it get there).
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Matthias99
What "settings"? Like figuring out if a dedicated partition for a pagefile is "faster" than putting it on the same partition as your OS, fixed vs. variable size, etc?

I don't see the point. No matter how "fast" you make your pagefile, it's going to be hundreds or thousands of times slower than RAM. Nothing you can do will noticeably improve performance in that situation (other than putting the swapfile on a hard drive with much faster seek times, like 15K RPM SCSI, and even that will have only a limited impact). If you're hitting the swapfile in your system on a regular basis, you need more RAM.

Correct, but not within the scope of this thread.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: nweaver
I think the benchmark would have to involve stability over the long term too. I always set up my machines with static page files so they don't fragment... I've been flamed for it before, but when your page file is in 1000+ fragments, it can get messy (yes, I have seen it get there).

No, I'm basing this purely on speed.
 

KoolDrew

Lifer
Jun 30, 2004
10,226
7
81
This will be pretty interesting. Is this going to test no pagefile vs. a pagefile, etc.? I don't know about benchmarking that, though. Having no pagefile may show an increase on some benchmarks, but in the long run it will hurt you.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: KoolDrew
This will be pretty interesting. Is this going to test no pagefile vs. a pagefile, etc.?

I think a pagefile is necessary in NT-based systems.

I'll try to compare various pagefile settings. I'll try to do it with different amounts of RAM, if I can find the time and stuff. But I don't know how to benchmark it. :p
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: KoolDrew
Originally posted by: n0cmonkey
I think a pagefile is necessary in NT-based systems.

I agree.

What I meant is, I thought you can't really disable it. :confused: :p

I don't know. So many people say tweaking it improves things, but no one is giving me a way to find out. :roll:
 

Smilin

Diamond Member
Mar 4, 2002
7,357
0
0
One game, one Office benchmark, and something memory-intensive like that benchmark from SolidWorks.

I presume you are going to benchmark such things as:
Pagefile on different drive and/or partition
Tiny pagefile vs. 1.5× physical memory pagefile
Pagefile of static vs dynamic size
Pagefile location

and my personal favorite:

"Let windows handle it" :p
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: n0cmonkey
Originally posted by: Matthias99
What "settings"? Like figuring out if a dedicated partition for a pagefile is "faster" than putting it on the same partition as your OS, fixed vs. variable size, etc?

I don't see the point. No matter how "fast" you make your pagefile, it's going to be hundreds or thousands of times slower than RAM. Nothing you can do will noticeably improve performance in that situation (other than putting the swapfile on a hard drive with much faster seek times, like 15K RPM SCSI, and even that will have only a limited impact). If you're hitting the swapfile in your system on a regular basis, you need more RAM.

Correct, but not within the scope of this thread.

Okay. You must have a lot more free time than me, then. :p

I would suggest trying to find an application that does number-crunching on very large datasets. Maybe a distributed computing app like Prime95? If you can tell it to use a very, very large amount of RAM, it will thrash your pagefile like crazy. A database application might also work. Photoshop would be a poor choice, since it actually maintains its own pagefile.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Smilin
One game, one Office benchmark, and something memory-intensive like that benchmark from SolidWorks.

I presume you are going to benchmark such things as:
Pagefile on different drive and/or partition
Tiny pagefile vs. 1.5× physical memory pagefile
Pagefile of static vs dynamic size
Pagefile location

and my personal favorite:

"Let windows handle it" :p

Thanks for the reply.

What I was thinking (a.k.a. what would mean the least amount of work for my lazy butt) is:
  • Let Windows handle it
  • Small static pagefile on the system drive
  • Small static pagefile on a second drive
  • Large static pagefile on the system drive
  • Large static pagefile on a second drive
  • Dynamic pagefile on the system drive
  • Dynamic pagefile on a second drive

I would try this with 256MB, 512MB, and 1024MB of RAM (PC3200 or PC2700).

I'm not sure if reboots in between would be enough to "clean it up," but I can figure that out later (when I get time).

If I happen to pick up a new hard drive, I'll reinstall Windows on it so I can use multiple partitions on the system drive.

That's just off the top of my head. I guess I'll start looking into benchmarks soon. :p
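
Also, if I end up scripting the setting changes between runs, something like this should set a static pagefile from the command line (untested, and the value format is from memory, so double-check it before trusting a run to it). It writes the same PagingFiles registry value the GUI does; the two numbers are initial and max size in MB, and it needs a reboot to take effect:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v PagingFiles /t REG_MULTI_SZ /d "C:\pagefile.sys 512 512" /f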
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Matthias99
Okay. You must have a lot more free time than me, then. :p

Not generally. But I might have a bit of free time soon, and the new job might have slightly better hours. ;)

I would suggest trying to find an application that does number-crunching on very large datasets. Maybe a distributed computing app like Prime95? If you can tell it to use a very, very large amount of RAM, it will thrash your pagefile like crazy. A database application might also work. Photoshop would be a poor choice, since it actually maintains its own pagefile.

Photoshop would require me to spend money too. ;)
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
I'm no OS expert but would a test like this work?

Create a script to make a 3/4-gig file with a simple list of numbers. Make a copy, so there are two copies sitting on your hard drive.
Write a program which loads the two lists, adds the values, and then throws out the answers. Time how long it takes.

Would that work? You'd need 1.5 gigs of memory to do it, would you not? Of course there's probably a more efficient way to program it so you aren't using 1.5 gigs of memory, but that's not what we're trying to do, is it? :)
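
Something like this, maybe (a rough, untested sketch; N and the filenames are made up, and you'd tune N until the two lists overflow physical RAM):

    import time

    N = 50 * 1000 * 1000  # values per list; tune until the two lists overflow RAM

    def make_file(name, n):
        # write a simple list of numbers, one per line
        out = open(name, "w")
        for i in range(n):
            out.write(str(i) + "\n")
        out.close()

    make_file("list_a.txt", N)
    make_file("list_b.txt", N)

    start = time.time()
    a = [int(line) for line in open("list_a.txt")]  # loading these should thrash
    b = [int(line) for line in open("list_b.txt")]  # the pagefile if RAM is short
    checksum = 0
    for x, y in zip(a, b):
        checksum = x + y  # add the values, throw the answers away
    print("elapsed:", time.time() - start, "seconds")

The timing covers the loading too, which is where most of the paging should happen.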
 

Phoenix86

Lifer
May 21, 2003
14,644
10
81
I have thought about this QUITE a bit and would love to help with some ideas. I'd like to raise some issues I discovered and see what your ideas on them are.

Real quick, here is a snip of a previous post on hardforum with some of the issues around testing the PF.

We still need a method to test paging. We all know paging happens; heck, when a big page-out happens you see your HDD light up a lot more than normal. The problem is, it doesn't happen on command. I.e., I can fire up a game and test average/min/max FPS over a given time and level, but you can't say "OS, start paging" and test performance there. I have a swap file enabled on my work laptop (beginning to think that was a mistake) and it paged for a good 15 seconds last week alt-tabbing between apps. That was a good 15 seconds where I was doing exactly jack sh!t on this machine (low-end PC, 550MHz, 512MB RAM).

Now if I could make that happen on command, then test the same scenario with and without a PF, we would have our test.



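Something like this might make it happen on command (a rough, untested sketch; the size is a made-up number, and on 32-bit XP one process only gets ~2GB of address space, so you might need a couple of copies running to starve a big-RAM box):

    import time

    MB = 1024 * 1024
    GRAB_MB = 1536  # set this above free physical RAM to force paging

    start = time.time()
    hog = bytearray(GRAB_MB * MB)  # commit one big block of memory
    for sweep in range(3):  # sweep it a few times so the OS has to keep paging
        for off in range(0, len(hog), 4096):
            hog[off] = 1  # one write per 4KB page is enough to touch it
    print("elapsed:", time.time() - start, "seconds")

Run it with and without a PF (or with different PF settings) and compare the times.
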
About what PF settings to test.
1. System managed.
2. No PF
3. Min=max, where min=enough PF size to supply commit charge.
4. Min= enough to supply commit charge, Max=something real high, say 2x min?
5. ?other?

Scenarios to test. Consider that how wide the gap (+/-) is between RAM and commit charge will impact PF usage.
1. "Plenty of RAM": more RAM than required to supply the commit charge. This represents what most of us aim for when building a system. E.g., a gaming setup where the game + OS + misc. drivers come to a ~850MB commit charge and the system has 1GB of RAM.
2. "Close but not enough": not enough RAM to supply the commit charge. This represents a system that should be upgraded, but isn't starved. E.g., the same ~850MB commit charge with 768MB of RAM.
3. "Not even close": the system is starved for RAM. This represents a system in dire need of a RAM upgrade. E.g., the same ~850MB commit charge with 512MB of RAM. Maybe 256MB of RAM with a lighter memory load? Though I'd think you would want to keep the software load the same throughout the test.

Next, how do you test? Do you want to use generic system testers, like in-game benchmarks? What about measuring PF usage through perfmon?

Here is a snip of one of KoolDrew's posts on how to measure it there.
Originally posted by: KoolDrew
1. Start > Run > "perfmon"
2. Click the "+" button.
3. Under Performance Object, choose Paging File, click Add, then Close.
4. Select the "% Usage" counter and read its Maximum value; that's the peak percentage of the PF used during that session.
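
If you'd rather log it than eyeball the graph, typeperf should grab the same counter from the command line (it ships with XP Pro, if I remember right; verify the exact counter name in perfmon first):

    typeperf "\Paging File(_Total)\% Usage" -si 1 -sc 300 -o pf_usage.csv

That samples once a second for five minutes and writes a CSV you can line up against a benchmark run.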

Gah, end of the day... I'll stop there for now. :D
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Phoenix86
The problem is, it doesn't happen on command.

That's a good point.

Hopefully this makes sense and is a valid thought (Star Wars in a matter of hours, valid thoughts are rare :D): we want to benchmark the system with various pagefile setups, not benchmark paging specifically.

The original issue:
Tweaking the pagefile can increase performance. A number of theories have been presented on how to tweak the pagefile.

The argument:
Tweaking the pagefile probably does not help, and most likely decreases performance.

Now, we know that if the user is paging heavily (other than unused crap being banished to the far reaches of the pagefile), they need more RAM. But we do not know if a static pagefile affects the computer's speed in any way, shape, or form.

So I guess if we use most of the RAM, and try to run something that could utilize that RAM, we would have our answer. So a small amount of starting RAM should be just fine, and adding more shouldn't change the outcomes much...

Ramble ramble.
 

Phoenix86

Lifer
May 21, 2003
14,644
10
81
Originally posted by: n0cmonkey
Originally posted by: Phoenix86
The problem is, it doesn't happen on command.

That's a good point.

Hopefully this makes sense and is a valid thought (Star Wars in a matter of hours, valid thoughts are rare :D): we want to benchmark the system with various pagefile setups, not benchmark paging specifically.

The original issue:
Tweaking the pagefile can increase performance. A number of theories have been presented on how to tweak the pagefile.
That's why I presented the various tweaks to test. If you're going to shoot down the theories, you're going to need to test them...

The argument:
Tweaking the pagefile probably does not help, and most likely decreases performance.

Now, we know that if the user is paging heavily (other than unused crap being banished to the far reaches of the pagefile), they need more RAM. But we do not know if a static pagefile affects the computer's speed in any way, shape, or form.

So I guess if we use most of the RAM, and try to run something that could utilize that RAM, we would have our answer. So a small amount of starting RAM should be just fine, and adding more shouldn't change the outcomes much...

Ramble ramble.
I think you should test low and high amounts of RAM to cover various real-world scenarios. I'd agree that the systems with low RAM will show whatever effect there is better (since they'll page more), but some people want to tweak the PF when they have a lot of RAM.

An example is disabling the PF. Obviously you can't do that unless you have enough RAM for the commit charge, but does it help? From my experience with monitoring the PF on high-RAM systems, you don't really ever write to the PF; the system will read some of it in small amounts. So there is some difference between having a PF and none, but again, which is better? I think on high-RAM systems the difference from any tweak will not be much at all, since the system isn't really using the PF much. Having numbers to demonstrate it is different...
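
If someone wants numbers for that read-vs-write pattern, the Memory object has separate counters for pages coming in and going out (counter names from memory, so verify them in perfmon):

    typeperf "\Memory\Pages Input/sec" "\Memory\Pages Output/sec" -si 1 -o paging.csv

Pages Input/sec is pages being read back from disk and Pages Output/sec is pages being written out, which would show whether a high-RAM box really only ever reads its PF.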
 

lansalot

Senior member
Jan 25, 2005
298
0
0
Put a piece of the pagefile on each disk, for as many as you have. In theory, that should give the best performance (assuming each drive has the same transfer rate; skip slow drives, obviously).
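
If you're scripting that, the PagingFiles registry value mentioned earlier takes one entry per file (untested, and the format is from memory; reg add separates REG_MULTI_SZ entries with \0):

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v PagingFiles /t REG_MULTI_SZ /d "C:\pagefile.sys 512 512\0D:\pagefile.sys 512 512" /f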