
Virtual Memory

Setting it too high won't really do anything but waste disk space. The best thing to do is just leave it system-managed.
 
You can't set virtual memory. It's always going to be 4 gigs on an x86 machine.

Microsoft screwed up the terms in the configuration stuff. It's called a page file, swap space, swap, swap file, not virtual memory. Virtual memory is something completely different.
 
The amount of virtual memory you need depends on your applications. You need as much VM as all your applications combined take up, except some data can be shared and the stupid Windows system monitor doesn't display that.
 
Originally posted by: drag
You can't set virtual memory. It's always going to be 4 gigs on an x86 machine.

Microsoft screwed up the terms in the configuration stuff. It's called a page file, swap space, swap, swap file, not virtual memory. Virtual memory is something completely different.

:beer:
 
except some data can be shared and the stupid Windows system monitor doesn't display that.

Would you mind explaining how you would go about implementing that then? Maybe the MS kernel developers just missed something...
 
Originally posted by: Nothinman
except some data can be shared and the stupid Windows system monitor doesn't display that.

Would you mind explaining how you would go about implementing that then? Maybe the MS kernel developers just missed something...

If several programs map the same file, then all these instances take up the same space.

So if you have program A at 70 MB, program B at 90 MB, and then they both go and map the same 100 MB file, then the total virtual memory usage of the two programs is 70+90+100 = 260 MB, not 70+90+2*100 = 360 MB.
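That accounting can be sketched with plain shell arithmetic (the 70/90/100 MB figures are just the hypothetical sizes from the example above):

```shell
#!/bin/sh
# Hypothetical sizes from the example above, in MB.
A=70        # program A's private memory
B=90        # program B's private memory
SHARED=100  # file mapped by both programs

# Naive accounting counts the shared mapping once per process.
echo "naive:  $((A + B + 2 * SHARED)) MB"   # prints "naive:  360 MB"
# Real usage counts the shared pages only once.
echo "actual: $((A + B + SHARED)) MB"       # prints "actual: 260 MB"
```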

The Windows kernel developers did not miss that; they do it the same way.

But the people who wrote the monitor are hiding these details, probably so as not to "confuse the user". So you can't figure out exactly how much memory you'd need with the stock Windows tools, but the default Unix tools do provide this info.

And the term "swap" is inappropriate: swapping means kicking an entire program out of memory, which practically never happens on a modern OS.
 
If several programs map the same file, then all these instances take up the same space.

I know how shared memory works; what I meant was, explain how you would keep track of that memory in a way MS hasn't thought of.

But the people who wrote the monitor are hiding these details, probably so as not to "confuse the user". So you can't figure out exactly how much memory you'd need with the stock Windows tools, but the default Unix tools do provide this info.

They're not hiding it, it's just not possible to measure in a timely fashion. Imagine you have a box with 200 processes on it and you want to count how many of those processes have a certain DLL mapped into their virtual memory space; the only way to do that is to go through each process's page tables and see whether it's mapped or not. The more processes and file mappings you have, the longer the scan takes. You could keep track of which processes have a page mapped (I think this is one thing the Linux rmap VM did), but it uses a lot of extra memory.

Unix tools show how much memory is shared in a process, but not what's in that shared space or to which processes it's common.
 
It is still useful to display how much memory is shared, even if it is too costly to say who shares what with whom.

If it is your own application you will usually have a good idea which applications share mappings.

But you don't know how big they are. The shared column in top gives you a much better idea than a single number for the whole process with no info about shared mappings. In that respect the info Unix top(1) provides is better, although not perfect.

Besides, under Linux and FreeBSD you can figure that out in /proc. Since shared memory is almost always file-backed you can walk the process list if you really want to know.

Here's how long it takes on Linux-2.6.7 on a dual 2.8 Xeon:
~/incoming(pls)1112% ctime wc /proc/*/maps > /dev/null
0:00.06 0.06 real 0.01 user 0.04 sys 98% CPU 0/128 faults

That is less than a tenth of a second. While that might be too expensive for regular use in top(1), it is certainly not too expensive if you want to have a look to plan how much VM you need. The above just gave you all the names of the files backing the non-anonymous mappings.
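A minimal sketch of that /proc walk, assuming Linux (the `libc` pattern is only an example of a commonly mapped file, not something from the timing run above): it prints the PID of every process whose maps file mentions the pattern.

```shell
#!/bin/sh
# Sketch: find which processes have a given file mapped by scanning
# /proc/<pid>/maps on Linux. The default pattern "libc" is an example;
# pass your own as the first argument.
PATTERN=${1:-libc}

# grep -l prints the names of the maps files that match;
# sed strips each name down to the bare PID.
grep -l "$PATTERN" /proc/[0-9]*/maps 2>/dev/null |
  sed 's|^/proc/\([0-9]*\)/maps$|\1|'
```

This is the same full scan the timing above measures: it tells you who maps the file, but the kernel still isn't keeping that answer around for you.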
 
In that respect the info Unix top(1) provides is better, although not perfect.

True, you can probably get those numbers in perfmon but that's a lot of work for a little bit of information.

Here's how long it takes on Linux-2.6.7 on a dual 2.8 Xeon:

Sure, but you also have to go back through and figure out which maps are shared between whom to get a good representation of real shared memory usage.
 