Nothinman
Elite Member
Strictly speaking, Virtual Memory is always in operation and cannot be "turned off". What is meant by such wording is "set the system to use no page file space at all". This would waste a lot of RAM. The reason is that when programs ask for an allocation of virtual memory space, they may ask for a great deal more than they ever actually bring into use - the total can easily run to hundreds of megabytes. These addresses have to be assigned somewhere by the system. If there is a page file available, the system can assign them to it - if there is not, they have to be assigned to RAM, locking it out from any actual use.
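To make the Windows side of this concrete, here's a minimal C sketch (not anyone's production code, just an illustration) using VirtualAlloc. Reserving address space costs almost nothing; committing it charges the allocation against the commit limit, which is roughly physical RAM plus total page file size, whether or not the pages are ever touched. With no page file, large commits exhaust that limit much sooner.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 100 * 1024 * 1024;  /* 100 MB, an arbitrary example size */

    /* Reserve address space only: no commit charge, no page file backing needed. */
    void *reserved = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);

    /* Commit: every committed page counts against the commit limit
       (roughly physical RAM + total page file size), even before it is
       touched. With no page file, calls like this start failing sooner. */
    void *committed = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                                   PAGE_READWRITE);

    printf("reserved=%p committed=%p\n", reserved, committed);

    if (committed) VirtualFree(committed, 0, MEM_RELEASE);
    if (reserved)  VirtualFree(reserved, 0, MEM_RELEASE);
    return 0;
}
```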
Generally, allocations are done in a lazy fashion, meaning that you request, say, 100M worth of addresses and the system goes "Ok, sure" but doesn't do any real allocation. Once you start touching those addresses, soft page faults happen and the system actually starts allocating the memory and putting your data into it. So not having a pagefile available shouldn't have any effect on your ability to allocate memory that you're not going to use, unless there are checks or limitations in the NT kernel to prevent that. I know Linux lets you tweak how it handles overcommit, but I'm not 100% sure about NT.
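Here's a quick Linux sketch of that lazy behaviour, watching VmRSS in /proc/self/status (the 100M figure just matches the example above): the mmap call hands back the address space almost for free, and resident memory only grows as memset touches pages and triggers soft faults. The overcommit knob mentioned is the vm.overcommit_memory sysctl.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Print the VmRSS line from /proc/self/status to watch resident memory grow. */
static void print_rss(const char *label)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return;
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmRSS:", 6) == 0)
            printf("%s %s", label, line);
    fclose(f);
}

int main(void)
{
    size_t size = 100 * 1024 * 1024;  /* "100M worth of addresses" */

    /* The kernel says "Ok, sure": address space is handed out,
       but no physical pages are allocated yet. */
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    print_rss("after mmap:  ");

    /* Touching the pages causes soft faults; only now is memory allocated. */
    memset(p, 1, size);

    print_rss("after touch: ");

    munmap(p, size);
    return 0;
}
```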
My point is, take a system with one hard drive, 1 GB of memory, and XP running Photoshop. When the I/O on that single disk is dealing with the OS pages and also with Photoshop paging and manipulating a 10 GB graphics file that lives on that same drive, you will likely see a slowdown. So if you add a separate drive or drives, you can manually distribute the paging or virtual memory space to drives that are not burdened with the OS.
Generally there is no I/O burden from the OS unless you start using parts of it that aren't already loaded into memory. Once something starts, it and all of its shared libraries stay loaded in memory until it exits or memory is so tight that the kernel decides to evict them.
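If you want to put a number on that, here's a rough Linux sketch (the /bin/ls path is just a stand-in; pass any binary or library) using mincore(), which reports which pages of a mapping are currently resident. Pages of a mapped binary are read in on first touch and then stay cached until memory pressure evicts them.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    /* Map any file read-only; a binary or shared library works well. */
    const char *path = argc > 1 ? argv[1] : "/bin/ls";
    int fd = open(path, O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) return 1;

    char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) return 1;

    long page = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + page - 1) / page;
    unsigned char *vec = malloc(npages);

    /* mincore() sets bit 0 of vec[i] if page i of the mapping is in RAM. */
    if (vec && mincore(map, st.st_size, vec) == 0) {
        size_t resident = 0;
        for (size_t i = 0; i < npages; i++)
            resident += vec[i] & 1;
        printf("%zu of %zu pages of %s resident\n", resident, npages, path);
    }

    free(vec);
    munmap(map, st.st_size);
    close(fd);
    return 0;
}
```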
But I'm not arguing that a separate physical drive for a pagefile in a memory-anemic system won't be helpful. Seriously, though, if you're trying to work on a graphics file that's 10x larger than the amount of physical memory in the system, you should expect slowdowns no matter what.
So, I am not talking about whether the page file will be used, or whether the OS or Photoshop manages the virtual memory space. My point is that it makes sense to be able to relocate it and optimize it. Even Microsoft has how-tos on page file optimization and suggests multiple drives.
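For reference, those relocation settings live in the registry on Windows, in the PagingFiles value under Session Manager\Memory Management. A minimal read-only C sketch (changing the value is normally done through the System control panel and needs admin rights plus a reboot; link against advapi32):

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Page file locations and sizes live in this REG_MULTI_SZ value; each
       entry is "path initial-size max-size", e.g. "D:\pagefile.sys 2048 4096". */
    char buf[1024];
    DWORD size = sizeof buf;
    LSTATUS rc = RegGetValueA(HKEY_LOCAL_MACHINE,
        "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
        "PagingFiles", RRF_RT_REG_MULTI_SZ, NULL, buf, &size);

    if (rc == ERROR_SUCCESS)
        /* Walk the double-NUL-terminated string list. */
        for (char *p = buf; *p; p += strlen(p) + 1)
            printf("%s\n", p);

    return 0;
}
```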
To be pedantic, the OS always manages virtual memory, since userland processes never get to see physical memory addresses. MS also has documentation that misuses the term 'virtual memory', and even some that contradicts other articles, because different docs were written by different people with varying levels of knowledge.
For the most part you shouldn't even really be worrying about the pagefile; just let the OS handle it. If memory really is tight, then moving it to its own drive might help, but it won't have anywhere near the effect of adding more memory.
This was semi-rhetorical. Of course they want to let the user configure the page file, for the same reasons MS allows and supports it.
But they shouldn't. One would think they would rather offload that support to MS by just using the standard memory management facilities inside Windows. All Adobe is doing by managing their own scratch space, on top of the other memory management they have to do, is adding more complexity.
