Page file management for best performance


Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
except....

I couldn't understand why, but the other guy working, who had years and years of exchange experience looked at it and said "check your defrag, betcha you got a fragmented page file" and he was right. Defrag, reboot (previous reboots had not helped) and all was good.

I have no doubts that it's a common thing that Exchange people run into, but that doesn't make it any less of a corner case.

The idea of containing the page file to its own dedicated partition assumes that the page file is on that partition only, and I'm pretty sure that was the original suggestion; spanning the page file across multiple partitions on the same disk would of course be completely redundant.

No, what I mean is that if your memory is low enough that you're actively using the pagefile, you're also going to be paging from binaries, shared libraries, data files, etc. on other volumes too, so putting the pagefile on its own partition is just going to cause more seeking and make things worse.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
No, what I mean is that if your memory is low enough that you're actively using the pagefile, you're also going to be paging from binaries, shared libraries, data files, etc. on other volumes too, so putting the pagefile on its own partition is just going to cause more seeking and make things worse.
Consider the case where the page file is at the start of the partition, followed by user data (let's say 30% of the disk is full), and the system is allowed to enlarge the page file. When a heavy usage pattern is detected by the system, the new chunk of the page file will be placed after the user data on the partition. As this (heavy) usage pattern continues, the paging algorithm tells the OS that all existing chunks of the page file are still needed, so the system will not remove them in the short periods when the system is idle and/or under only light load.

As time goes by, the user adds new applications to his machine, filling 2/3 of his disk, and keeps up a heavy usage pattern. When he reaches a new peak of memory requirements, the paging algorithm will try to guess the best size for a new chunk of page file to allocate.

Now we have three chunks of page file scattered across the disk: one at the beginning of the disk, one in the middle of the user data, and one at the end of the data. While the paging algorithm will try to estimate which chunks of code/data are best stored on which fragments of the page file, this is only a prediction, and at times a simple switch from one active window to another will cause the system to read from ALL the page file fragments in a non-optimal sequence (middle chunk first, first chunk second, last chunk third, and then an additional read from the first chunk), causing a noticeable delay.

The technique described earlier would have prevented this specific delay.

Now remember that this is only a small, idealized example, with only three large chunks of page file; in the real world I've seen far worse cases.
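
To put rough numbers on that read order, here is a toy sketch (hypothetical block offsets, seek cost modelled purely as head travel distance; it only illustrates the scenario described above, it is not a claim about how NT actually schedules pagefile I/O):

```python
# Toy model of the scenario above: three pagefile fragments at hypothetical
# disk offsets, read in the non-optimal order described (middle, first, last,
# then first again), versus the same four reads against one contiguous
# pagefile. "Cost" here is just head travel distance, nothing more.
fragments = {"first": 0, "middle": 45_000, "last": 90_000}   # hypothetical block offsets
read_order = ["middle", "first", "last", "first"]

def head_travel(positions, start=0):
    travel, head = 0, start
    for pos in positions:
        travel += abs(pos - head)   # seek distance to the next fragment
        head = pos
    return travel

fragmented = head_travel([fragments[name] for name in read_order])
contiguous = head_travel([0, 0, 0, 0])   # all four reads land in one region

print(f"head travel, fragmented pagefile: {fragmented:,} blocks")
print(f"head travel, contiguous pagefile: {contiguous:,} blocks")
```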

/edit

I'm not saying this optimization should be perceived as a 'best practice' or even a good 'tweak', but it does have its merits, at least in some very specific scenarios.

(Probably highly debatable, but wth.) Also, you cannot take other, unrelated read/write operations (user or system) into account when you design/deploy paging algorithms, or any other system service for that matter. In theory, system services should be designed to work on their own terms (higher priority) and preferably in their own secluded background microcosms (memory and other resources like disks).
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
As time goes by, the user adds new applications to his machine, filling 2/3 of his disk, and keeps up a heavy usage pattern. When he reaches a new peak of memory requirements, the paging algorithm will try to guess the best size for a new chunk of page file to allocate.

A) Putting permanent additional memory load on a machine without actually increasing the physical memory on the machine is stupid.
B) I would guess (never actually looked) that the kernel will enlarge the pagefile by a set amount and won't do any 'guessing'.

Now we have three chunks of page file scattered across the disk: one at the beginning of the disk, one in the middle of the user data, and one at the end of the data.

In your hypothetical scenario, yeah, but in real life normal filesystem usage will come into effect, so you can't even guess where the pagefile chunks will be. And the whole thing is moot anyway; IIRC Windows defaults to 1.5x physical memory for its starting pagefile when left system managed, so the only time the pagefile is going to grow is when you're trying to process 50% more data than your machine can handle.
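
For anyone who wants to check those numbers on their own box, here is a minimal Windows-only sketch using the documented GlobalMemoryStatusEx API via Python's ctypes; the 1.5x figure printed at the end is just the rule of thumb being discussed in this thread, not something the API returns:

```python
# Minimal Windows-only sketch: read physical RAM and the commit limit via the
# documented GlobalMemoryStatusEx API, then print the 1.5x rule of thumb that
# keeps getting quoted here (the 1.5x figure is the guideline, not API output).
import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),   # commit limit: roughly RAM + pagefile
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

stat = MEMORYSTATUSEX()
stat.dwLength = ctypes.sizeof(stat)
if not ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat)):
    raise ctypes.WinError()

ram_mb = stat.ullTotalPhys / 2**20
print(f"physical RAM:        {ram_mb:,.0f} MB")
print(f"commit limit:        {stat.ullTotalPageFile / 2**20:,.0f} MB")
print(f"1.5x rule of thumb:  {ram_mb * 1.5:,.0f} MB initial pagefile")
```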

While the paging algorithm will try to estimate which chunks of code/data are best stored on which fragments of the page file, this is only a prediction, and at times a simple switch from one active window to another will cause the system to read from ALL the page file fragments in a non-optimal sequence (middle chunk first, first chunk second, last chunk third, and then an additional read from the first chunk), causing a noticeable delay.

You know for a fact that the NT paging system has algorithms to determine which extent of a pagefile should be used for storage? I know it'll use the pagefile on the least currently active drive, but I've never heard of the estimation that you're talking about. Oh and the move between active windows that caused pagefile data to be read has most likely also caused other data to be paged in from other files too, so you have to take that into account as well.

The technique described earlier would have prevented this specific delay.

Doubtful, since the pagefile is only like 1/5 of the paging activity on a system. I would bet money that if you were able to set up a scenario to actually time this and compare those times to those from a setup with an 'optimized' pagefile, the difference would be well within the margin of error of all the readings.
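
If someone did want to set up such a timing scenario, a crude harness might look like the sketch below (assumptions: a pre-created scratch file named scratch.bin that is much larger than RAM so the OS cache doesn't hide the seeks, and 4 KB reads standing in for page-sized I/O; it contrasts sequential with scattered reads rather than literally relocating a pagefile):

```python
# Rough timing sketch: contrast sequential 4 KB reads with randomly scattered
# 4 KB reads over the same file, as a crude stand-in for contiguous vs.
# fragmented pagefile access. The file name and sizes are hypothetical; use a
# scratch file much larger than RAM or the OS cache will hide the seeks.
import os
import random
import time

PATH = "scratch.bin"          # hypothetical scratch file, create it beforehand
PAGE = 4096                   # one "page" per read
READS = 2048                  # number of reads per pass

def time_reads(offsets):
    fd = os.open(PATH, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    try:
        start = time.perf_counter()
        for off in offsets:
            os.lseek(fd, off, os.SEEK_SET)
            os.read(fd, PAGE)
        return time.perf_counter() - start
    finally:
        os.close(fd)

pages = os.path.getsize(PATH) // PAGE
sequential = [i * PAGE for i in range(READS)]
scattered = [random.randrange(pages) * PAGE for _ in range(READS)]

print(f"sequential: {time_reads(sequential):.3f} s")
print(f"scattered:  {time_reads(scattered):.3f} s")
```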
 

kobymu

Senior member
Mar 21, 2005
576
0
0
A) Putting permanent additional memory load on a machine without actually increasing the physical memory on the machine is stupid.
B) I would guess (never actually looked) that the kernel will enlarge the pagefile by a set amount and won't do any 'guessing'.
A) User intelligence is irrelevant to the system; no matter how much a more knowledgeable user wishes the system were tweaked to their own needs, OS designers will need to consider the less technically inclined much more than power users (almost by definition).
B) Doubtful; there isn't a one-size-fits-all, and even so, in your guess, how many chunks of this set amount will the system allocate contiguously?

In your hypothetical scenario, yeah, but in real life normal filesystem usage will come into effect, so you can't even guess where the pagefile chunks will be.
Or in other words, in real life the page file will be much more fragmented, causing an even more erratic seek pattern.

And the whole thing is moot anyway; IIRC Windows defaults to 1.5x physical memory for its starting pagefile when left system managed, so the only time the pagefile is going to grow is when you're trying to process 50% more data than your machine can handle.
And we all know that XP machines with only 256MB are very rare :).

You know for a fact that the NT paging system has algorithms to determine which extent of a pagefile should be used for storage? I know it'll use the pagefile on the least currently active drive, but I've never heard of the estimation that you're talking about.
No, but that part was an 'even if'. However, don't forget that some of the code out there is carrying 10 years, if not more, of optimizations; you would be surprised just how many lines of code are like that.

Oh and the move between active windows that caused pagefile data to be read has most likely also caused other data to be paged in from other files too, so you have to take that into account as well.
Not necessarily; in some scenarios that other code/data was already loaded from the file into memory and later paged out to disk, and now the system only needs the data that was paged out.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
A) User intelligence is irrelevant to the system; no matter how much a more knowledgeable user wishes the system were tweaked to their own needs, OS designers will need to consider the less technically inclined much more than power users (almost by definition).
B) Doubtful; there isn't a one-size-fits-all, and even so, in your guess, how many chunks of this set amount will the system allocate contiguously?

My point is that there's no way an algorithm in the pagefile selection code can make paging seem fast; either the hardware is fast or it's not, and once you get into a paging storm you're screwed no matter what. And there's no way to guess how many chunks will be allocated contiguously because everyone's filesystem will be laid out differently.

Or in other words, in real life the page file will be much more fragmented, causing an even more erratic seek pattern.

Or the opposite could be true and the pagefile could be laid out contiguously out of pure luck. You can't make an educated guess in either direction without knowing the user's current filesystem layout and knowing what they're going to do with the data in that filesystem.

And we all know that XP machines with only 256MB are very rare

IME they are, but maybe the people that I know are just smarter than that. And I would guess that the majority of the people with those low memory XP machines aren't trying to do much more than browse the Internet and read email on them.

No, but that part was an 'even if'. However, don't forget that some of the code out there is carrying 10 years, if not more, of optimizations; you would be surprised just how many lines of code are like that.

No, I wouldn't be surprised, but how do you know some of it is 10 years old? MS could have replaced it in Win2K or XP without anyone noticing, since they're the only users of that code. Unless you work for MS or one of the companies that has licensed the source code, you can't know either way.

Not necessarily; in some scenarios that other code/data was already loaded from the file into memory and later paged out to disk, and now the system only needs the data that was paged out.

Maybe, but you can't make that determination either way unless you happen to be looking at the machine currently doing all of the paging. But the chances are good that if you're doing a lot of pagefile I/O you're also paging from other places as well, because the system will have been looking for memory that hasn't been touched in a while, and that includes binaries, shared libraries, etc.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
My point is that there's no way an algorithm in the pagefile selection code can make paging seem fast; either the hardware is fast or it's not, and once you get into a paging storm you're screwed no matter what. And there's no way to guess how many chunks will be allocated contiguously because everyone's filesystem will be laid out differently.

And my point was that even an extremely well-engineered page file algorithm can't account for all these factors, so some tweaks from the user side could prove beneficial. And while best/worst-case scenarios are important, you forget that there is a whole middle ground between them, which is exactly where the OS can do some 'magic', especially in the 256MB XP example.

Or the opposite could be true and the pagefile could be laid out contiguously out of pure luck. You can't make an educated guess in either direction without knowing the user's current filesystem layout and knowing what they're going to do with the data in that filesystem.
Which is exactly why a knowledgeable user should, at the very least, consider tweaks like these, instead of relying on pure luck.

IME they are, but maybe the people that I know are just smarter than that. And I would guess that the majority of the people with those low memory XP machines aren't trying to do much more than browse the Internet and read email on them.
Now, this is exactly why this example is so important: a 256MB XP machine doesn't have enough memory to browse the internet and read email, not with all the system services and user applications in the background (instant messaging, antivirus apps and whatnot). So in comes the paging mechanism and does its bit, and all these not-so-technically-savvy users think they have enough memory because their system seems to work fairly well.

Situations like these are exactly what the page file algorithm is geared for: to make the best out of not-so-good situations, because when the system is out of resources, it's out of resources and there is nothing it can do. A lot of system services are built to be experts in the art of compromise more than anything else, and in the case of a knowledgeable user who wants to squeeze every ounce of performance out of his system (especially when he has a powerful machine), some of the compromises that the OS makes by default should be overridden.

No, but that part was an 'even if'. However, don't forget that some of the code out there is carrying 10 years, if not more, of optimizations; you would be surprised just how many lines of code are like that.

No, I wouldn't be surprised, but how do you know some of it is 10 years old? MS could have replaced it in Win2K or XP without anyone noticing, since they're the only users of that code. Unless you work for MS or one of the companies that has licensed the source code, you can't know either way.
That part was a general statement, more than anything else.

Maybe, but you can't make that determination either way unless you happen to be looking at the machine currently doing all of the paging. But the chances are good that if you're doing a lot of pagefile I/O you're also paging from other places as well, because the system will have been looking for memory that hasn't been touched in a while, and that includes binaries, shared libraries, etc.
The idea is that situations like these can happen, not that they will happen most of the time, and as long as said optimization doesn't make the instances where this scenario doesn't hold true any worse, it's a valid optimization.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
And my point was that even an extremely well-engineered page file algorithm can't account for all these factors, so some tweaks from the user side could prove beneficial. And while best/worst-case scenarios are important, you forget that there is a whole middle ground between them, which is exactly where the OS can do some 'magic', especially in the 256MB XP example.

But it's not magic, and to get any appreciable benefit you have to do something extreme like getting a dedicated drive for your pagefile, at which point you might as well just spend that money on more memory.

Which is exactly why a knowledgeable user should, at the very least, consider tweaks like these, instead of relying on pure luck.

But I still stand by my statement that even with these 'tweaks' you'll never get any real, measurable improvements except in extreme circumstances and no one has been able to provide any numbers to the contrary.

Now, this is exactly why this example is so important: a 256MB XP machine doesn't have enough memory to browse the internet and read email, not with all the system services and user applications in the background (instant messaging, antivirus apps and whatnot),

Sure it does. Win2K (sorry, no XP machines around) itself boots up with ~90M used, so round up to 100M for XP. That leaves 150M for IE, OE, an IM client and an antivirus, which should be fine. Obviously a pagefile will help so that unused services can be paged out and make things slightly better, but whether it's contiguous or not is irrelevant.

That part was a general statement, more than anything else.

Yes, and one with no factual basis. While I'm sure MS has been refining their pagefile algorithms since NT 3.1, there's no way to know how old the code really is.

The idea is that situations like these can happen, not that they will happen most of the time, and as long as said optimization doesn't make the instances where this scenario doesn't hold true any worse, it's a valid optimization.

No, the idea should be to optimize for the common case, since that's the one that will have the most effect on your work. Time spent trying to fix corner cases that may never happen is pointless. And setting a fixed size for the pagefile may make things worse, since things may crash when the pagefile gets full.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
But it's not magic, and to get any appreciable benefit you have to do something extreme like getting a dedicated drive for your pagefile, at which point you might as well just spend that money on more memory.
It's 'magic' when it makes some people I know think they have enough memory for their system, when in fact, they don't.

Sure it does. Win2K (sorry, no XP machines around) itself boots up with ~90M used, so round up to 100M for XP. That leaves 150M for IE, OE, an IM client and an antivirus, which should be fine. Obviously a pagefile will help so that unused services can be paged out and make things slightly better, but whether it's contiguous or not is irrelevant.
:) Ok, if you say 2k instead of XP, I say Firefox instead of IE.

Yes, and one with no factual basis.
Objection!

No, the idea should be to optimize for the common case...
So if you have a system whose workload isn't common, will you revert all the common-case optimizations that the OS deploys by default?

And setting a fixed size for the pagefile may make things worse, since things may crash when the pagefile gets full.
Setting a fixed-size pagefile partition to a size under the maximum size that the page file can grow to is a bad idea, and anyone who does that deserves to have his system crash on him.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
It's 'magic' when it makes some people I know think they have enough memory for their system, when in fact, they don't.

But that's orthogonal to the thread; we're talking about optimizing the pagefile, not whether or not to have one at all.

Ok, if you say 2k instead of XP, I say Firefox instead of IE.

The difference between the two is minimal; I just opened 3 tabs in FF and the same 3 sites in IE6, and the memory usage is only ~5M different. And given that most of the non-technical people I know only browse a site or two at a time, I doubt it really matters which they choose.

So if you have a system whose workload isn't common, will you revert all the common-case optimizations that the OS deploys by default?

That's a red herring; you can't group "all the common-case optimizations" into one like that, because some may be relevant and some not.

Setting a fixed-size pagefile partition to a size under the maximum size that the page file can grow to is a bad idea, and anyone who does that deserves to have his system crash on him.

You can't create a pagefile larger than the filesystem you're trying to put it in unless you edit the registry manually, and if you go to those lengths, yes, you get what you deserve. But I just tried that and Windows didn't crash; it just created a pagefile large enough to fill the filesystem.
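
For reference, the manual setting lives under the Memory Management key in the registry. A minimal read-only sketch (Python's standard winreg module, Windows only) that prints the configured entries; the exact format of the entries varies a little by Windows version:

```python
# Read-only sketch: print the pagefile configuration Windows keeps in the
# registry. Each entry is typically "path initial_MB maximum_MB"; a
# system-managed pagefile usually shows up with zeros for the sizes.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    paging_files, _type = winreg.QueryValueEx(key, "PagingFiles")

for entry in paging_files:        # REG_MULTI_SZ comes back as a list of strings
    print(entry)                  # e.g. "C:\pagefile.sys 2048 4096"
```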
 

HannibalX

Diamond Member
May 12, 2000
9,359
2
0
Originally posted by: tempoct
Which one has the best real-world performance? Say all are on a single RAID 0 volume.
1. OS and page file on the same partition
2. Page file on a different partition than the OS
3. Page file on a different, dedicated partition (sized just a little bigger than the needed size)

The general rule (AFAIK) is that the page file should be set to a static size (the recommended size, which is usually 1.5x the physical memory size).

Wonder which one has the best performance in the real world. If the performance doesn't differ much, choice 1 is the best for simplicity.

What do you think? This is for XP 32 and Vista 64. The OS shouldn't make much of a difference though.

With 3GB of RAM I just turn it off.
 

tempoct

Senior member
May 1, 2006
246
0
0
Originally posted by: Trinitron
Originally posted by: tempoct
Which one has the best real-world performance? Say all are on a single RAID 0 volume.
1. OS and page file on the same partition
2. Page file on a different partition than the OS
3. Page file on a different, dedicated partition (sized just a little bigger than the needed size)

The general rule (AFAIK) is that the page file should be set to a static size (the recommended size, which is usually 1.5x the physical memory size).

Wonder which one has the best performance in the real world. If the performance doesn't differ much, choice 1 is the best for simplicity.

What do you think? This is for XP 32 and Vista 64. The OS shouldn't make much of a difference though.

With 3GB of RAM I just turn it off.

I only have 2GB.
I'm tracking page file usage on another XP machine (also 2GB of memory); it is used around 1.2-1.5GB most of the time. I have quite a lot of things open, like Outlook, Visual Studio, SQL Management Studio, etc. When I close some programs, it drops to around 1GB.
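
If you'd rather log that over time than keep watching Task Manager, here is a small sketch using the third-party psutil package (the 30-second interval and the output format are arbitrary choices; psutil's swap figures are used here as the closest readily available proxy for pagefile usage):

```python
# Periodically sample pagefile/swap and RAM usage with psutil (pip install psutil).
# The 30-second interval is an arbitrary choice for this sketch; stop with Ctrl+C.
import time
import psutil

INTERVAL_SECONDS = 30

while True:
    swap = psutil.swap_memory()
    virt = psutil.virtual_memory()
    print(f"{time.strftime('%H:%M:%S')}  "
          f"pagefile used: {swap.used / 2**20:7.0f} MB ({swap.percent:4.1f}%)  "
          f"RAM used: {virt.used / 2**20:7.0f} MB ({virt.percent:4.1f}%)")
    time.sleep(INTERVAL_SECONDS)
```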
 

kobymu

Senior member
Mar 21, 2005
576
0
0
..whether or not to have one at all.
That's orthogonal to the thread because that wasn't my point.

That's a red herring; you can't group "all the common-case optimizations" into one like that...
Nice. That interpretation of what I said would be equal to me saying you shouldn't change the display resolution that Windows defaults to; they are both equally stupid (hint: change 'all' to 'some').

/edit

And just for the record, can you present any proof that leaving the page file at its default settings will give you the best performance?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
That's orthogonal to the thread because that wasn't my point.

Well if that wasn't your point then you didn't word it very well because it is what you said.

Nice. That interpretation of what I said would be equal to me saying you shouldn't change the display resolution that Windows defaults to; they are both equally stupid (hint: change 'all' to 'some').

But once again it is what you said; word for word, you said "all the common-case optimizations that the OS deploys by default?"

And just for the record, can you present any proof that leaving the page file at its default settings will give you the best performance?

I never said it would give the best performance, I said it's not worth it to change it.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
That's orthogonal to the thread because that wasn't my point.

Well if that wasn't your point then you didn't word it very well because it is what you said.

Nice. That interpretation of what I said would be equal to me saying you shouldn't change the display resolution that Windows defaults to; they are both equally stupid (hint: change 'all' to 'some').

But once again it is what you said; word for word, you said "all the common-case optimizations that the OS deploys by default?"
You know what, I don't feel like arguing semantics. You win on a technicality (read: my bad wording).

I never said it would give the best performance, I said it's not worth it to change it.
Not worth it to you maybe, but not everyone is like you; some people like to explore stuff like that (although I would be surprised if you don't like doing that too), and since you can't guarantee that it's "not worth it", you shouldn't be saying that.

/edit

This reminds me of a quote (and thanks to Google, I found its originator):

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. "
George Bernard Shaw
 

bsobel

Moderator Emeritus, Elite Member
Dec 9, 2001
13,346
0
0
And just for the record, can you present any proof that leaving the page file at its default settings will give you the best performance?

Best settings for you at any given instant? No. Best settings overall for most if not all systems on average? Yes. Based on LOTS of data collected by the VM team. They just don't 'guess' at what might work here; they do everything possible to make performance the best for all systems.

You're taking Nothinman to task for saying it doesn't matter, but you've yet to show a config change that provides any measurable improvement. Until you do, the burden of proof is on you.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
You know what, I don't feel like arguing semantics. You win on a technicality (read: my bad wording).

If you can't say what you actually mean, how are you supposed to have a decent conversation? Although I am impressed that you didn't spell it 'symantecs' like most people I've seen. =)

Not worth it to you maybe, but not everyone is like you; some people like to explore stuff like that (although I would be surprised if you don't like doing that too), and since you can't guarantee that it's "not worth it", you shouldn't be saying that.

Worth is expressed as a quantity of value; if the value you get from moving the pagefile is virtually 0 and the time spent moving it is greater than 0, how can it be worth it? You're the one advocating the 'optimization', so the burden of proof is on you. It's along the same lines as saying that your machine will run faster if you install Windows onto drive C: instead of D:.

And yes, I am a person who enjoys exploring technology, and part of that is debunking myths like this. Everyone just assumes that optimizing your pagefile, defragmenting your filesystems, cleaning your registry, etc. will give nice boosts in performance, but no one can actually prove it. And it's generally not true. During XP's installation procedure MS flashes up some propaganda talking up XP (not sure why they don't advertise other products instead, but hey) that says things like "XP makes the Internet faster" (or maybe it says "better", I can't remember exactly); do you believe those as well?
 

bendixG15

Diamond Member
Mar 9, 2001
3,483
0
0
Nothinman ..... we sure are lucky that you hang here.....

I still say .. A little knowledge is a dangerous thing

Someday, you may agree with that....

Good Luck in your quest
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Best settings for you at any given instant? No. Best settings overall for most if not all systems on average? Yes. Based on LOTS of data collected by the VM team. They just don't 'guess' at what might work here; they do everything possible to make performance the best for all systems.

All I was saying is that as long as people don't tinker with stuff, they can't make it better, at least until it is debunked properly, like in this case (didn't know about the bolded part :eek: ). I stand corrected. (P.S. a link would be nice.)

Koby.
 

bsobel

Moderator Emeritus, Elite Member
Dec 9, 2001
13,346
0
0
All I was saying is that as long as people don't tinker with stuff, they can't make it better, at least until it is debunked properly, like in this case (didn't know about the bolded part :eek: ). I stand corrected. (P.S. a link would be nice.) Koby.

Koby, I don't have a link to that, you'll have to 'trust me'. Which is a weird request from a stranger on an internet forum, but the regular posters here know me as a chief software architect for Symantec and we work very closely with MS (especially the Windows team).
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Koby, I don't have a link to that, you'll have to 'trust me'. Which is a weird request from a stranger on an internet forum, but the regular posters here know me as a chief software architect for Symantec and we work very closely with MS (especially the Windows team).

I will consider making an exception then.

And btw, I completely accept that the burden of proof is on my side, even when I'm just trying to play the devil's advocate.
 

tempoct

Senior member
May 1, 2006
246
0
0
OK, so what's the conclusion? If I don't have an extra controller and an extra dedicated HDD for the page file, I should just leave it alone and have Windows manage everything?