
1gig ram, 768meg Virtual memory ok?

JEDI

Lifer
i just ghosted my image to a new machine. the old machine had 512megs ram, and 768meg virtual memory.

new machine has 1gig. any problems leaving virtual memory at 768meg?
 
Originally posted by: JEDI
i just ghosted my image to a new machine. the old machine had 512megs ram, and 768meg virtual memory.

new machine has 1gig. any problems leaving virtual memory at 768meg?

I'm assuming you're referring to pagefile size, and your best bet (as has been discussed many times on this forum) is to leave it system managed.
 
Originally posted by: n0cmonkey
Let Windows manage it. One less headache.

but if Windows continuously increases/decreases the pagefile size, won't the pagefile become fragmented, causing a performance hit?

i.e. part of the file is at the beginning of the hard drive, and the rest of the file is at the end of the hard drive?
 
Originally posted by: JEDI
Originally posted by: n0cmonkey
Let Windows manage it. One less headache.

but if Windows continuously increases/decreases the pagefile size, won't the pagefile become fragmented, causing a performance hit?

i.e. part of the file is at the beginning of the hard drive, and the rest of the file is at the end of the hard drive?

I doubt it would affect the experience in a noticeable way. With 1GB of RAM you shouldn't be hitting the pagefile often.
 
Let Windows manage it. One less headache.

99% of the time this is the best thing to do.

but if Windows continuously increases/decreases the pagefile size, won't the pagefile become fragmented, causing a performance hit?

Pagefile fragmentation doesn't affect performance outside of some extreme cases. If you're really worried about fragmentation, though, just make sure your initial size is high enough that it won't need to be resized.
 
Originally posted by: MrChad
Originally posted by: JEDI
how do i set initial size while letting the system manage size?

You don't.

Just let the system manage it. Really. 🙂

so why shouldn't i worry about pagefile fragmentation? KoolDrew's reply didn't answer that question.
 
Originally posted by: JEDI
Originally posted by: MrChad
Originally posted by: JEDI
how do i set initial size while letting the system manage size?

You don't.

Just let the system manage it. Really. 🙂

so why shouldn't i worry about pagefile fragmentation? KoolDrew's reply didn't answer that question.

With 1GB of RAM you shouldn't be frequently paging large amounts of useful data.
Fragmentation of small bits of data doesn't kill paging speed as much as you think.
 
so why shouldn't i worry about pagefile fragmentation? KoolDrew's reply didn't answer that question.

Because the pagefile isn't read sequentially, it's random access, so having it contiguous isn't that important.

 
Because the MS devs who write that stuff know a whole lot more about it than any of us, and thus they probably have it coded in the best possible way.
 
so why shouldn't i worry about pagefile fragmentation?

Windows never reads or writes more than 64KB to the pagefile at a time, and Windows will almost never read or write the pagefile in sequential 64KB chunks. So, regardless of whether the pagefile is fragmented or not, the heads will be moving all over the place anyway.
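The head-movement argument above can be sketched as a toy simulation. Everything here is an assumption for illustration (the 200 GB disk size, the uniform-random I/O pattern, the fragment counts are all made up, not measurements); it just shows that once random pagefile reads are interleaved with unrelated disk I/O, average head travel comes out about the same whether the pagefile is in one piece or sixty-four.

```python
import random

random.seed(0)

MB = 1024 ** 2
PF_SIZE = 768 * MB            # a 768 MB pagefile, as in the thread
DISK = 200 * 1024 * MB        # hypothetical 200 GB disk
CHUNK = 64 * 1024             # pagefile I/O happens in chunks of at most 64 KB
N_IO = 2000                   # pagefile accesses per trial
TRIALS = 20

def layout(fragments):
    """Place a pagefile split into `fragments` equal, non-overlapping
    pieces at random spots on the disk; return an offset translator."""
    frag_len = PF_SIZE // fragments
    slots = random.sample(range(0, DISK - frag_len, frag_len), fragments)
    return lambda off: slots[off // frag_len] + off % frag_len

def avg_seek(fragments):
    """Average head travel when random pagefile reads are interleaved
    with unrelated I/O elsewhere on the disk."""
    total = 0.0
    for _ in range(TRIALS):
        to_disk = layout(fragments)
        seq = []
        for _ in range(N_IO):
            seq.append(to_disk(random.randrange(PF_SIZE - CHUNK)))
            seq.append(random.randrange(DISK))  # other file I/O in between
        total += sum(abs(b - a) for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
    return total / TRIALS

print(f"contiguous pagefile  : avg seek {avg_seek(1) / MB:8.0f} MB")
print(f"fragmented, 64 pieces: avg seek {avg_seek(64) / MB:8.0f} MB")
```

With a pagefile-only workload and no interleaved I/O, the contiguous layout would win, which is part of why opinions on this differ.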
 
Originally posted by: JEDI
i just ghosted my image to a new machine. the old machine had 512megs ram, and 768meg virtual memory.

new machine has 1gig. any problems leaving virtual memory at 768meg?

I've set it to 1500~1500, is that safe?
 
I've set it to 1500~1500, is that safe?

There's no reason to set a static pagefile. Either leave it system managed or set a custom size where the initial is around 4x your actual PF usage. The max should be at least 2x what you set the initial to.
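The rule of thumb above, as arithmetic (the 4x/2x multipliers are the poster's suggestion, not an official recommendation; `peak_usage_mb` is whatever peak pagefile usage you observe, e.g. in Task Manager):

```python
def pagefile_sizes(peak_usage_mb):
    """Initial = 4x observed peak pagefile usage; max = 2x initial."""
    initial = 4 * peak_usage_mb
    maximum = 2 * initial
    return initial, maximum

# If you typically peak around 200 MB of pagefile usage:
print(pagefile_sizes(200))  # -> (800, 1600)
```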
 
Here is a guide to pagefile optimization.

Link

Caveat: pagefile optimization is one of those topics where there are a large number of differing opinions and no clear guidance as to the right answers.
 
Let Windows handle it. This has been covered in many threads many times, and every single one ends with the general consensus to let Windows handle it.
 
Originally posted by: KoolDrew
so why shouldnt i worry about pagefile fragmentation?

Windows never reads or writes more than 64KB to the pagefile at a time, and Windows will almost never read or write the pagefile in sequential 64KB chunks. So, regardless of whether the pagefile is fragmented or not, the heads will be moving all over the place anyway.

This is somewhat correct and somewhat not. The main reason you would want to keep the pagefile defragmented is that it allows the OS to capture a dump file without significant errors. This is guidance historically given for server usage, specifically Server 2003, but I don't see why it wouldn't apply to XP as well.
 
Originally posted by: KoolDrew
I've set it to 1500~1500, is that safe?

There's no reason to set a static pagefile. Either leave it system managed or set a custom size where the initial is around 4x your actual PF usage. The max should be at least 2x what you set the initial to.

Actually, yes there is. A static pagefile allocates the full size up front, instead of setting it too low and forcing the OS to grow it, creating more fragmentation. See one of my above quotes for the main reason why you want to try and avoid fragmentation. Guidance isn't really too clear on this topic, btw. It's more of a "theory" and best-guess approach.
 
Originally posted by: dguy6789
Let Windows handle it. This has been covered in many threads many times, and every single one ends with the general consensus to let Windows handle it.

Even Microsoft doesn't recommend allowing Windows to handle it if you want it optimized. General consensus is not always correct.
 
but if Windows continuously increases/decreases the pagefile size, won't the pagefile become fragmented, causing a performance hit?

Windows never 'continuously increases/decreases the pagefile size'. If necessary it will grow, but it won't shrink until you reboot. And really, if you're using the pagefile enough to cause it to grow, I doubt the time required to extend it will be noticeable in between all of the other disk I/O going on.

Caveat: pagefile optimization is one of those topics where there are a large number of differing opinions and no clear guidance as to the right answers.

Because there's no accurate way to gauge the performance effect. But the main thing to think about is that if you have enough memory in your machine, you will never use the pagefile, so optimizing it is pointless. And on top of that, if you're low on memory and doing a lot of paging, the pagefile is only one small part of where that paging is happening, so optimizing it is pointless.

This is somewhat correct and somewhat not. The main reason you would want to keep the pagefile defragmented is that it allows the OS to capture a dump file without significant errors. This is guidance historically given for server usage, specifically Server 2003, but I don't see why it wouldn't apply to XP as well.

Even if that's true, which I don't think it is, why do you want a complete dump file? How many people here have the kernel debugging tools installed, let alone know how to use them?

Actually, yes there is. A static pagefile allocates the full size up front, instead of setting it too low and forcing the OS to grow it, creating more fragmentation. See one of my above quotes for the main reason why you want to try and avoid fragmentation. Guidance isn't really too clear on this topic, btw. It's more of a "theory" and best-guess approach.

And if it grows, so what? I would bet that you won't notice the latency added by the growth anyway, because you'll be paging to/from a lot of other files on the disk at the same time. Yes, it might add a little file fragmentation, because there's no guarantee that there will be room directly at the end of the pagefile to grow into, but there's also no guarantee that the added fragmentation will hurt performance. In some circumstances fragmentation can even help performance: part of the boot-speed optimization in XP is a tool that analyzes your bootup process and intentionally fragments the files read on bootup so that they're broken up into the order that the data is paged in from disk, because if the files were all contiguous, the latency of seeking back and forth between them would be higher than the sequential read of the deliberately fragmented data.

Even Microsoft doesn't recommend allowing Windows to handle it if you want it optimized. General consensus is not always correct.

MS also has many conflicting articles on their site, so you can't even consider them the authoritative source. That, and most of their non-MSDN articles misuse the term virtual memory, which doesn't instill much confidence in their documentation people.
 
Windows never 'continuously increases/decreases the pagefile size'. If necessary it will grow, but it won't shrink until you reboot. And really, if you're using the pagefile enough to cause it to grow, I doubt the time required to extend it will be noticeable in between all of the other disk I/O going on.

Agreed.


Because there's no accurate way to gauge the performance effect. But the main thing to think about is that if you have enough memory in your machine, you will never use the pagefile, so optimizing it is pointless. And on top of that, if you're low on memory and doing a lot of paging, the pagefile is only one small part of where that paging is happening, so optimizing it is pointless.

Not true. The pagefile is used regardless of the amount of RAM someone has. The system tracks when each page was last accessed, and if a page hasn't been accessed for a period X it is flushed to the pagefile. EVERYONE uses the pagefile regardless of the amount of RAM you have.

Even if that's true, which I don't think it is, why do you want a complete dump file? How many people here have the kernel debugging tools installed, let alone know how to use them?

Are you calling me out on this being true? Are you saying it's not, or do you not know?

And if it grows, so what? I would bet that you won't notice the latency added by the growth anyway, because you'll be paging to/from a lot of other files on the disk at the same time. Yes, it might add a little file fragmentation, because there's no guarantee that there will be room directly at the end of the pagefile to grow into, but there's also no guarantee that the added fragmentation will hurt performance. In some circumstances fragmentation can even help performance: part of the boot-speed optimization in XP is a tool that analyzes your bootup process and intentionally fragments the files read on bootup so that they're broken up into the order that the data is paged in from disk, because if the files were all contiguous, the latency of seeking back and forth between them would be higher than the sequential read of the deliberately fragmented data.

I think this response is off somewhat. You may be factually correct on the boot process, but contextually you are wrong. Are you really trying to say that a fragmented page file is better than an unfragmented one? Or are you pulling stuff out of thin air? It's got to be the latter.

MS also has many conflicting articles on their site, so you can't even consider them the authoritative source. That, and most of their non-MSDN articles misuse the term virtual memory, which doesn't instill much confidence in their documentation people.

The info they give for pagefile optimization is right on (moving it to a separate disk, create the PF and let the math handle the two). You don't want to listen to them... no skin off my back. But then again, some of the stuff you are saying about the pagefile is incorrect, as pointed out above. So should we not trust you now too?
 
Not true. The pagefile is used regardless of the amount of RAM someone has. The system tracks when each page was last accessed, and if a page hasn't been accessed for a period X it is flushed to the pagefile. EVERYONE uses the pagefile regardless of the amount of RAM you have.

Not totally true. Pages are only flushed if there is memory pressure, and pages that have another backing store besides the pagefile (e.g. executables, shared libraries, etc.) will be evicted from memory without any pagefile I/O, because they can be paged back in from the original file. In the general case, yes, there will probably be some small amount of data in the pagefile, but it'll have virtually no real effect on performance.
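The distinction being drawn here can be sketched like this (a toy model; the page names and the two-way split are illustrative, and real eviction policy is far more involved): clean file-backed pages are simply dropped, because they can be re-read from their original file on demand, while anonymous pages have nowhere to go but the pagefile.

```python
# Toy pages: (name, backing_file). backing_file is None for anonymous
# data (heap, stacks) whose only backing store is the pagefile.
pages = [
    ("kernel32 code", "kernel32.dll"),
    ("app heap", None),
    ("notepad code", "notepad.exe"),
    ("thread stack", None),
]

def evict(name, backing_file):
    if backing_file is not None:
        # Clean file-backed page: discard with no pagefile I/O; it can
        # be paged back in from the original file on demand.
        return f"drop '{name}', re-readable from {backing_file}"
    # Anonymous page: must be written out before the frame is reused.
    return f"write '{name}' to the pagefile"

for name, backing in pages:
    print(evict(name, backing))
```

One case the sketch omits: a dirty file-backed page (e.g. a modified mapped file) is written back to its own file, not to the pagefile.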

Are you calling me out on this being true? Are you saying it's not, or do you not know?

Which part?

You may be factually correct on the boot process, but contextually you are wrong. Are you really trying to say that a fragmented page file is better than an unfragmented one? Or are you pulling stuff out of thin air? It's got to be the latter.

It might be better or it might not. In either case the margin of difference is going to be so small that it'll be unnoticeable. It's not something you can benchmark: the pagefile isn't accessed sequentially, so whether it's contiguous or not is irrelevant. And on top of that, if you're using the pagefile you're also going to be doing a lot of paging from other files on disk, so the pagefile access is only a fraction of the I/O causing you to think "Man, this is slow."

The info they give for pagefile optimization is right on (moving it to a separate disk, create the PF and let the math handle the two). You don't want to listen to them... no skin off my back. But then again, some of the stuff you are saying about the pagefile is incorrect, as pointed out above. So should we not trust you now too?

If you think what I've said is incorrect, where is the conclusive proof? There isn't going to be any, because there's no way to measure the impact any of those optimizations have (aside from putting the pagefile on its own separate physical drive, which is obviously better); it's almost totally subjective. Most of the people claiming real performance gains haven't done any real testing; they just did one or two things and ran around in their placebo-induced bliss telling people it was better without really understanding what they even changed.
 
Not totally true. Pages are only flushed if there is memory pressure, and pages that have another backing store besides the pagefile (e.g. executables, shared libraries, etc.) will be evicted from memory without any pagefile I/O, because they can be paged back in from the original file. In the general case, yes, there will probably be some small amount of data in the pagefile, but it'll have virtually no real effect on performance.

That's good clarification, but a bit contradictory to what I have read. Not saying you are wrong here, but when tracking the pagefile it is obvious it is used without a shortage of physical memory occurring (I am assuming this is what you mean by "pressure"; if not, please correct me). Do you have any places you can point me towards that explain what you are saying in detail?

BTW... this is good conversation, let's not point fingers and call each other wrong (guilty as charged) but try to solve the conundrum or at the least provide clarification to all these damn pagefile management questions that pop up from time to time here.
 