
Disabled paging file and digging it.

Originally posted by: VirtualLarry
You people that seem to think that the pagefile is only used as a "spillover", when all available physical RAM is allocated by kernel+applications, have the wrong view of how NT's VM paging works. The pagefile is used as the primary backing-store for all anonymous allocations. Physical RAM is merely a cache (one layer of the hierarchical memory-model) for the pagefile. When an application allocates anonymous memory, it allocates it DIRECTLY out of the pagefile, NOT out of physical RAM.(*) Therefore, pragmatic use of the pagefile would suggest that pagefile size never be set smaller than physical RAM. A pagefile size of 1.5x-2x of physical RAM is generally recommended.

When you say "generally" recommended, I assume you mean not in high-RAM situations? The way I take your explanation though, the old rule-of-thumb would seemingly be applicable to low and high RAM situations alike, yet the article linked in the KB near the top of this thread states:

A common recommendation is to make the page file 1.5 times the size of the installed RAM. This recommendation makes sense only for computers with small amounts of RAM (256 MB or less). For example, there is usually not much point in allocating a page file that is 3 GB if the computer has 2 GB of RAM. The objective in such RAM rich systems is to avoid using the pagefile at all by providing sufficient RAM that all virtual memory can be in RAM all the time. If the virtual memory in use exceeds the amount of installed RAM, performance will suffer and having a larger pagefile will not help this situation.

The fairly disappointing part, at least for me, is that even in these "RAM rich systems" it doesn't seem possible to "avoid using the pagefile at all" short of doing it by force, which as we learn in this thread, is not something recommended.
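For what it's worth, the rule of thumb being debated is just linear scaling, which is exactly what the KB article objects to on large machines. A quick Python sketch of the arithmetic (the numbers are the ones from the KB quote above, nothing more):

```python
def recommended_pagefile_mb(ram_mb, factor=1.5):
    # Classic rule of thumb: pagefile = factor x physical RAM.
    return int(ram_mb * factor)

# The rule scales linearly with installed RAM, which is what the
# KB article finds pointless on "RAM rich" machines:
print(recommended_pagefile_mb(256))   # the low-RAM case the KB accepts
print(recommended_pagefile_mb(2048))  # the "3 GB for 2 GB of RAM" case
```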
 
Originally posted by: rseiler
Originally posted by: VirtualLarry
You people that seem to think that the pagefile is only used as a "spillover", when all available physical RAM is allocated by kernel+applications, have the wrong view of how NT's VM paging works. The pagefile is used as the primary backing-store for all anonymous allocations. Physical RAM is merely a cache (one layer of the hierarchical memory-model) for the pagefile. When an application allocates anonymous memory, it allocates it DIRECTLY out of the pagefile, NOT out of physical RAM.(*) Therefore, pragmatic use of the pagefile would suggest that pagefile size never be set smaller than physical RAM. A pagefile size of 1.5x-2x of physical RAM is generally recommended.

When you say "generally" recommended, I assume you mean not in high-RAM situations? The way I take your explanation though, the old rule-of-thumb would seemingly be applicable to low and high RAM situations alike, yet the article linked in the KB near the top of this thread states:

A common recommendation is to make the page file 1.5 times the size of the installed RAM. This recommendation makes sense only for computers with small amounts of RAM (256 MB or less). For example, there is usually not much point in allocating a page file that is 3 GB if the computer has 2 GB of RAM. The objective in such RAM rich systems is to avoid using the pagefile at all by providing sufficient RAM that all virtual memory can be in RAM all the time. If the virtual memory in use exceeds the amount of installed RAM, performance will suffer and having a larger pagefile will not help this situation.

The fairly disappointing part, at least for me, is that even in these "RAM rich systems" it doesn't seem possible to "avoid using the pagefile at all" short of doing it by force, which as we learn in this thread, is not something recommended.

The keyword there is IF. They presume that you may exceed the RAM capabilities of the machine, and leave that information in so you know why system slowdowns can be expected when you pass your RAM threshold. There is no hard-line message saying *when* you exceed the amount of installed RAM; it says *if*.
 
The fairly disappointing part, at least for me, is that even in these "RAM rich systems" it doesn't seem possible to "avoid using the pagefile at all" short of doing it by force, which as we learn in this thread, is not something recommended.

Avoiding the pagefile isn't a good thing; in some cases using it will help overall performance, because unused pages are saved to disk and the RAM they occupied is reused for filesystem cache.
 
Originally posted by: Nothinman
The fairly disappointing part, at least for me, is that even in these "RAM rich systems" it doesn't seem possible to "avoid using the pagefile at all" short of doing it by force, which as we learn in this thread, is not something recommended.

Avoiding the pagefile isn't a good thing; in some cases using it will help overall performance, because unused pages are saved to disk and the RAM they occupied is reused for filesystem cache.
Exactly - proper usage of the pagefile can actually *increase* overall performance. A concept that some otherwise intelligent people in this forum didn't seem to grasp too well the last time that I tried to argue that point.
 
A common recommendation is to make the page file 1.5 times the size of the installed RAM. This recommendation makes sense only for computers with small amounts of RAM (256 MB or less). For example, there is usually not much point in allocating a page file that is 3 GB if the computer has 2 GB of RAM.
Quite the contrary, one should definitely do that, at least if it is possible for the commit charge to reach over 2GB.

The objective in such RAM rich systems is to avoid using the pagefile at all by providing sufficient RAM that all virtual memory can be in RAM all the time. If the virtual memory in use exceeds the amount of installed RAM, performance will suffer and having a larger pagefile will not help this situation.
Again, that presumes a "spillover" model, which NT's VM/paging is not. Indeed, if "virtual memory in use" exceeds the amount of installed RAM - why by golly, that's why we have pageable VM systems in the first place! And if you followed the previous advice in that note, you would be screwed, because you would be out of (virtual) memory. But not if you followed the advice to allocate 1.5x-2x pagefile space compared to physical RAM.
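The "out of (virtual) memory" point comes down to the commit limit, which on NT is roughly physical RAM plus total pagefile space. A rough sketch of that arithmetic (ignoring kernel reservations and pagefile growth, so the exact limit the OS reports will differ slightly):

```python
def commit_limit_mb(ram_mb, pagefile_mb):
    # NT can't commit more anonymous memory than it can back with
    # RAM + pagefile, so the commit limit is (roughly) their sum.
    return ram_mb + pagefile_mb

ram = 2048
# Minimal pagefile, as the KB note suggests for "RAM rich" machines:
no_pf = commit_limit_mb(ram, 0)
# The 1.5x rule defended above:
with_pf = commit_limit_mb(ram, int(1.5 * ram))
# A 3 GB commit charge fits under the second limit but not the first.
print(no_pf, with_pf)
```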

In fact, let's consider the performance issues, if you have a 3GB pagefile, 2GB of RAM, and a task that accesses a number of large on-disk files, and has a large (1GB, say) set of initialization data, that is only used at startup and rarely thereafter.

You will get maximum overall performance, likely, by writing out that 1GB worth of startup data to the pagefile, and then you have an additional 1GB worth of physical RAM to use for filesystem caching for loading those large files for processing.

A contrived example, certainly, but it simply tries to point out that keeping stale pages in physical RAM is ... no different than wasted RAM. Paging out to disk turns "wasted" pages into "available" pages - that's a good thing, and why a pagefile can *increase* performance in the long run! (Also why setting DisablePagingExecutive is not a performance benefit in most cases.)
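The contrived example can be mimicked with a toy LRU model, purely for illustration - the page names and counts are made up, and real NT working-set trimming is far more involved. It counts disk reads when stale startup pages are pinned in RAM versus when they can be paged out to make room for file cache:

```python
from collections import OrderedDict

def disk_reads(evict_anonymous, frames=4, stale=("init1", "init2"),
               files=("f1", "f2", "f3", "f4"), passes=5):
    # Toy model: physical RAM is `frames` page slots shared between
    # anonymous pages and file-cache pages, managed LRU. If
    # evict_anonymous is False, the stale startup pages stay pinned,
    # shrinking the usable file cache.
    ram = OrderedDict()  # page -> None, in LRU order
    reads = 0
    for p in stale:      # startup: touch the init data once
        ram[p] = None
    for _ in range(passes):          # steady state: cycle over files
        for p in files:
            if p in ram:
                ram.move_to_end(p)   # cache hit
                continue
            reads += 1               # cache miss: read from disk
            while len(ram) >= frames:
                victim = next(k for k in ram
                              if evict_anonymous or k not in stale)
                del ram[victim]
            ram[p] = None
    return reads

print(disk_reads(evict_anonymous=False))  # pinned: cache thrashes
print(disk_reads(evict_anonymous=True))   # stale pages paged out
```

With the stale pages pinned, the two remaining cache slots thrash on the four-page working set every pass; once they can be evicted, only the first pass misses.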

However, increasing the size of the filesystem cache, like any cache, reaches a point of diminishing returns. NT does seem to have a bit of a problem being over-zealous in paging out active applications just to free up RAM to add to the filesystem cache. On a server, where disk access patterns approach random, more disk cache is a good thing, but on a desktop where most I/O tasks are sequential and/or localized, there is a bit less benefit from growing the filesystem cache to huge proportions. There is also a need for lower interactive latency on desktops.
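The diminishing-returns shape can be illustrated with an idealized cache that always holds the k most popular pages, under an assumed 1/rank (Zipf-like) access distribution - an assumption for illustration, not a measured NT workload:

```python
# Page popularity follows a 1/rank law over N distinct pages; an
# ideal cache of k slots holds the k most popular pages.
N = 1000
weights = [1.0 / r for r in range(1, N + 1)]
total = sum(weights)

def hit_rate(k):
    # Fraction of accesses that land in the k most popular pages.
    return sum(weights[:k]) / total

for k in (10, 100, 500, 1000):
    print(f"cache of {k:4d} pages -> hit rate {hit_rate(k):.2f}")
```

Under this distribution the first 10 pages already cover roughly 39% of accesses, while doubling the cache from 500 to 1000 pages buys only about nine more points - the marginal benefit of each added cache page keeps shrinking.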

Another thing that may be overlooked - ATI's video drivers store various things in pageable system RAM, so if something is paging madly trying to re-draw - it may be the video drivers directly, and not the fault of the OS at all.
 
So the long and short of this thread is:

Normal page file use = Good (may increase performance)
Running out of RAM with no pagefile to spill into = Bad (decreases overall performance)
Turning off the page file altogether = Worst (arguably a minor performance increase, with potential lockups)
 