Virtual memory

Rottie

Diamond Member
Feb 10, 2002
4,795
2
81
I have 512MB of memory on my rig. How do I set it? I remember a long time ago someone told me to set it at 1000 for better performance. Is 1000 about the right size for 512MB?
My rig runs Win2k Pro.
 

Guga

Member
Feb 21, 2003
74
0
0
Windows 2k and XP create a swap file that is 1.5x to 3x the size of your RAM.

If you have enough disk space, fix it at 1500 MB.
If you have 2 disks, select the one that is not your system drive to hold your virtual memory.
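The 1.5x-3x rule of thumb above is easy to sketch. Here is a quick illustration in Python; the function name is mine, and the rule itself is a common convention rather than an official Microsoft formula:

```python
# Sketch of the 1.5x-3x pagefile sizing rule of thumb discussed above.
# All sizes are in MB; the function name is purely illustrative.

def pagefile_range(ram_mb):
    """Return the (minimum, maximum) pagefile sizes under the rule of thumb."""
    return int(ram_mb * 1.5), int(ram_mb * 3)

print(pagefile_range(512))   # a 512MB machine -> (768, 1536)
```

For the OP's 512MB machine this gives 768-1536 MB, so a fixed value of 1000-1500 MB falls inside the range.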
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Don't touch it, and let the programmers from Microsoft manage it for you. No matter what you think, or who you think you are, THEY know more about memory management than you (unless of course you ARE a programmer for Microsoft).
 

Guga

Member
Feb 21, 2003
74
0
0
Of course Microsoft knows more about memory management than anyone else, but they also know that not everyone has 100GB or more of disk space, and for that reason they make the swap file variable in size.

There are lots of techniques for tuning and improving your OS, and one of them is to move your swap file to a disk other than the system disk. Another problem you can have with variable swap files is fragmentation, so making the virtual memory file fixed in size has only advantages. Like I said, Windows can vary the size of the virtual memory between 1.5x and 3x the size of your RAM. If you fix it at the maximum size Windows can use, where in the world can that be bad?? You are just permanently "wasting" that space.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Why would you want to waste space when you don't need to? It doesn't matter if Windows has to resize the swap file on the fly; if you're paging so much that you need a bigger swap file, you are not going to notice the 0.1 seconds needed to resize it.

As for fragmentation, the swap file is accessed randomly, so by nature it is fragmented internally, so who cares if the swap file gets fragmented?
 

Guga

Member
Feb 21, 2003
74
0
0
As for fragmentation, the swap file is accessed randomly, so by nature it is fragmented internally, so who cares if the swap file gets fragmented?

It's Microsoft itself that recommends, for example, moving your swap file to another disk, and you know why?? Because you do notice!!! Disk accesses are tremendously slower than memory accesses, so every byte counts.
The fragmentation can be noticed too if the block being requested by the CPU is broken apart all over your disk.

If you don't like to mess with your computer configuration, we have to accept that, but don't be so mad because others want to. I said it once and I'll say it again: operating systems are not tuned by default. They are supposed to be stable across several platforms, and Microsoft developers know that. You can believe that with several changes you can really improve your OS performance.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
If you think they will really improve performance, then let's see some tests. Some real data that shows it improves performance.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
The reason I am not defending my point of view is because a thread like this gets created once or twice a month and turns into a giant flame fest. Most people don't need these "tweaks", if you can call them that. Most of the "tweaks" are basic steps that you should be doing anyway to keep your computer running speedy.

But I didn't ask for that link, I asked for some tests that SHOW that what you say improves your PC's performance. Not some other person naming off a bunch of stuff that doesn't really do much. If you have a computer where you need to disable all of the XP GUI stuff, then you shouldn't be running XP. 2000 will run MUCH better.

So like I said before, show me some benchmarks/tests that show an improvement in speed by "tweaking" your swap file.
 

Guga

Member
Feb 21, 2003
74
0
0
Last reply to this thread.

Until now I've just been answering back for one reason only.
I won't give you tests or benchmarks, but what I can tell you is that after a fresh install of Windows XP and some "tweaks", the system feels smoother and more responsive.
Is it only a sensation? Well... I really don't care. My system is rock stable; I haven't reinstalled XP since I changed my motherboard (almost 2 years ago now). I like it the way it is, so I'm just giving an opinion. One thing I don't believe is that it is slower, so...

I just said Microsoft recommends moving your swap to another disk. And I was referring to the fact that if your disk is too fragmented, you can notice some performance loss.

For some reason some computers with the same specs run faster than others, even if you don't overclock them. This forum must be full of people who understand what I mean.

 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Test setup: Windows 2k3 server with Exchange 2k3. System-managed page file (MS knows best, right?).

2 months later, the mail store is unmounted with no way to remount it (rebooted, cleared logs, etc.). Checked defrag: the PF is in 2700 fragments. Defragged the drive, rebooted, and the mail store mounted like normal.

Setup 2: Win2k3 server with Exchange 2k3, same server h/w, same disks, same RAM (same box!!). Only difference: set pagefile min/max to the same values on OS install. 8 months later, no problems; page file fragments are reported as 1.


So, in case one your Exchange server doesn't run, and in case 2 it does. That's a pretty big increase :p
 

TGS

Golden Member
May 3, 2005
1,849
0
0
Do you really need a microsoft document to tell you that fragmented file systems reduce disk performance?


Seriously..?
 

Phoenix86

Lifer
May 21, 2003
14,644
10
81
As for fragmentation, the swap file is accessed randomly, so by nature it is fragmented internally, so who cares if the swap file gets fragmented?
A fragmented PF can cause system slowdowns. I have also seen a heavily fragmented PF crash the system.

Why?

Sure, the data is accessed 4K at a time, but if it's accessing more than 4K (which is very common) then the location *IS* important.

Now, none of this matters if you're only using 200MB of memory on a 512MB RAM system. So, the important question is: what is the difference between memory usage and available memory?

That'll tell you how much PF you're using, and give you an idea on how much you need.

If you need a lot, chances are the PF has grown several times (each causing file fragments).

The only real problem with system managed is its nature to fragment the PF. If you have a defragmenter that'll touch the PF, this is kinda moot.

1.5-3x RAM is a bad formula because it doesn't account for actual usage.

Default isn't always the best, regardless of how much MS knows, because ultimately they don't know the most important factor: usage.
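The usage-based sizing idea above can be sketched in a few lines of Python. The peak commit figure and the safety margin here are made-up assumptions for illustration, not numbers from the post:

```python
# Size the pagefile from observed usage instead of a RAM multiplier.
# The example numbers and the 1.5 safety margin are illustrative assumptions.

def pagefile_from_usage(peak_commit_mb, ram_mb, margin=1.5):
    """Pagefile size (MB): peak commit charge beyond physical RAM, padded
    by a safety margin, with a small floor so it never reaches zero."""
    overflow = max(peak_commit_mb - ram_mb, 0)
    return max(int(overflow * margin), 256)

# A 512MB system whose commit charge peaks at 700MB:
print(pagefile_from_usage(700, 512))   # -> 282
```

The point is the same as the post's: a machine that barely pages needs far less than "1.5x RAM", and a machine that pages heavily may need more.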
 

Guga

Member
Feb 21, 2003
74
0
0
Do you really need a microsoft document to tell you that fragmented file systems reduce disk performance?

Besides the fact that I was referring to swap file fragmentation and never said that I read Microsoft documentation about defragmentation, I will say you need glasses.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
The defaults are fine. I've noticed better performance with them (anecdotally of course, I haven't had the time to benchmark anything yet). There are people that are tweakers, and they should tweak to their hearts' content. I think they should be required to know the difference between Virtual Memory and the Pagefile though. ;)

Generally, messing with that stuff won't help you. If there are any increases at all, they won't be worth the time and effort.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Originally posted by: n0cmonkey
The defaults are fine. I've noticed better performance with them (anecdotally of course, I haven't had the time to benchmark anything yet). There are people that are tweakers, and they should tweak to their hearts' content. I think they should be required to know the difference between Virtual Memory and the Pagefile though. ;)

Generally, messing with that stuff won't help you. If there are any increases at all, they won't be worth the time and effort.

I was trying to decide whether or not to bring that up.. but since you have..

pagefile != virtual memory
 

TGS

Golden Member
May 3, 2005
1,849
0
0
Originally posted by: Guga
Do you really need a microsoft document to tell you that fragmented file systems reduce disk performance?

besides the fact that I was refering to swap file fragmentation and never said that I read microsoft documentation about defragmentation I will say you need glasses

That was a response to mcrusty...

I'm talking about the pagefile, which the OP is referencing. If you have a fragmented pagefile, the disk actuator will have to accomplish more movement to fetch data as it is paged back into main memory. That is an uncontested fact.

If you force the pagefile to be a contiguous file, the actuator is not going to have to move as much to pull blocks of sequentially written data. The entire purpose of defragmenting a disk is to accomplish this same thing. Now of course that does not mean the data within that set pagefile size will not fragment. It does ensure the data is being written to a common area so that, once again, the actuator is not moving as far as if the pagefile were spread over multiple areas of the disk. MS recommends moving your pagefile to another disk to gain access to another actuator.
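The actuator argument above can be illustrated with a toy model: total head travel (in tracks) while reading a file's blocks in order, comparing a contiguous layout against a randomly scattered one. The track counts and block placement are invented purely for illustration:

```python
# Toy model of the actuator-movement argument: total head travel when
# reading a file laid out contiguously vs. scattered in fragments.
# Track numbers and fragment placement are made up for illustration.
import random

def head_travel(block_tracks):
    """Sum of absolute track-to-track moves while reading blocks in order."""
    return sum(abs(b - a) for a, b in zip(block_tracks, block_tracks[1:]))

random.seed(0)
n_blocks = 1000
contiguous = list(range(n_blocks))                           # one unbroken run
fragmented = [random.randrange(100_000) for _ in range(n_blocks)]  # scattered

print(head_travel(contiguous))                               # 999
print(head_travel(fragmented) > head_travel(contiguous))     # True
```

The scattered layout forces orders of magnitude more head travel for the same amount of data, which is the whole case for keeping the pagefile contiguous.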

Here's the MS article on VM and PFs


Again to Mcrusty, MS does not know more than me about *my* setup. Their generic cookie-cutter defaults are based on a basic set of criteria pulled from the system. I mean, how many people on a home network are running an MTU of 1500, even if their ISP is only running 1492 or less? Fragmented packets there, boss. Though I'm sure I should leave that alone, because MS knows best about how my system is being used.

I could also just install Exchange, run the optimizer, and leave everything alone. MS knows best and knows how my backend storage is laid out. Damn, why don't they enable PAE extensions by default on my 8GB memory systems? Oh that's right, they DON'T do a thorough scan of my system and merely provide the settings most likely to work correctly. So basically once you install Windows, optimizing your settings is a waste... That really doesn't make any damn sense.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
The reason the defaults are the way they are is because they are the best solution for the most people. They work for the most people. Most people don't tweak. It works out nicely.

If they're tweaking enough to need to mess with these settings, they should probably know what they are doing.
 

Phoenix86

Lifer
May 21, 2003
14,644
10
81
Originally posted by: n0cmonkey
The reason the defaults are the way they are is because they are the best solution for the most amount of people. They work for the most amount of people. Most people don't tweak. It works out nicely.

If they're tweaking enough to need to mess with these settings, they should probably know what they are doing.
QFT, this is the #1 reason default isn't always the best option, no matter "how much MS knows".

Defaults are set for compatibility, not performance.

See also, the TCP hack for fast connections in 9x. Somewhere around the 2K/XP line, they switched the default to the hack because more people were using faster connections vs. modems to connect.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Phoenix86
Originally posted by: n0cmonkey
The reason the defaults are the way they are is because they are the best solution for the most amount of people. They work for the most amount of people. Most people don't tweak. It works out nicely.

If they're tweaking enough to need to mess with these settings, they should probably know what they are doing.
QFT, this is the #1 reason default isn't always the best option, no matter "how much MS knows".

Defaults are set for compatibility, not performance.

See also, the TCP hack for fast connections in 9x. Somewhere around the 2K/XP line, they switched the default to the hack because more people were using faster connections vs. modems to connect.

I'm not saying this is a reason not to tweak. It deserves more investigation whether these "stupid pagefile tricks" actually have any benefit whatsoever. ;)
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
If they're tweaking enough to need to mess with these settings, they should probably know what they are doing.

The sad thing is that a lot of Windows users think they know more than they do, so they try to change settings without actually knowing the repercussions. And on top of it all, most of the tweakers have at the very least 512M of memory, usually 1G or more, so tweaking the pagefile is a huge waste of time for them.

Defaults are set for compatibility, not performance.

True, but at the same time the performance of the defaults isn't ignored. I guarantee MS spends a lot of time profiling their code to make it work well without user intervention. Sure, the best solution is to put the pagefile on a separate physical drive on a separate channel (assuming IDE), but it would be a waste of time to develop an algorithm to have the Windows installer decide which drive is best for the pagefile at installation time, so they just put it alongside Windows.
 

TGS

Golden Member
May 3, 2005
1,849
0
0
Originally posted by: n0cmonkey
The reason the defaults are the way they are is because they are the best solution for the most amount of people. They work for the most amount of people. Most people don't tweak. It works out nicely.

If they're tweaking enough to need to mess with these settings, they should probably know what they are doing.

merely provide the settings most likely to work correctly.

Originally posted by: n0cmonkey
Does the configured MTU really matter with Path MTU Discovery?

Hmmm
edit: just more off-topic here.
 

Phoenix86

Lifer
May 21, 2003
14,644
10
81
Originally posted by: n0cmonkey
I'm not saying this is a reason not to tweak. It deserves more investigation whether these "stupid pagefile tricks" actually have any benefit whatsoever. ;)
I'm all for benchmarks and baselines, something few tweakers even know of. :(
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
I don't know that it has anything to do with speed, but rather with long-term reliability/stability of the system.