
SSD and Swap File

tafinho

Junior Member
Jun 2, 2010
2
0
0
Taking into account the limited R/W cycles of an SSD, what is the impact of having the swap file on an SSD?

This is mainly relevant to the latest review of a laptop with an SSD.

Does anyone have any insight on this?
I think the subject deserves an article....
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Ah, and once again..

First of all, there's not really a limited read-cycle count for flash, so it's only about writes. Second, no, you won't run out of write cycles with desktop usage (servers are another topic) for more than five years.
For a more detailed discussion (including all those fun facts about 10k write cycles per cell, wear leveling and so on) just search around; we seem to have this topic at least once a week ;)
But the short answer is: even with a swap file on your SSD, and without disabling indexing or following any of those other dubious suggestions for how to "save your SSD", you won't run out of write cycles.
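A rough back-of-envelope makes the point (a sketch only; the 10k cycles per cell and perfect wear leveling are idealized assumptions, and real write amplification would lower the result):

#include <stdio.h>

int main(void)
{
    /* Idealized endurance: total writable bytes divided by daily writes */
    double capacity_gb = 80.0;        /* e.g. an 80GB desktop SSD */
    double cycles_per_cell = 10000.0; /* the MLC-era figure quoted above */
    double daily_writes_gb = 40.0;    /* very heavy swap/hibernate usage */
    double days = capacity_gb * cycles_per_cell / daily_writes_gb;
    printf("~%.0f days, or ~%.0f years\n", days, days / 365.0);
    return 0;
}

Even at 40GB of writes a day, that comes out to roughly 20,000 days, which is why desktop write endurance is a non-issue.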
 

tafinho

Junior Member
Jun 2, 2010
2
0
0
Voo said:
Ah, and once again..

First of all, there's not really a limited read-cycle count for flash, so it's only about writes. Second, no, you won't run out of write cycles with desktop usage (servers are another topic) for more than five years.
For a more detailed discussion (including all those fun facts about 10k write cycles per cell, wear leveling and so on) just search around; we seem to have this topic at least once a week ;)
But the short answer is: even with a swap file on your SSD, and without disabling indexing or following any of those other dubious suggestions for how to "save your SSD", you won't run out of write cycles.

I have no doubt that it may take years for problems to arise. The question is, at what point does performance start to suffer?

I easily do around 40GB a day of swapfile writes and such, just from hibernates and suspends.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Hibernating writes to hiberfil.sys and has absolutely nothing to do with the pagefile (not sure about Linux, haven't used it on a laptop). So there's no point at which performance starts to suffer; it's an either/or. Just look at the reviews where they compare SSDs to HDDs, because that's the performance delta you'll see.
 

SimMike2

Platinum Member
Aug 15, 2000
2,577
1
81
I refuse to baby my SSD and treat it like a problem hard drive. If it wants to be a hard drive, let it be a hard drive. Once it is initially set up correctly, with proper alignment and no defrag, I let Windows 7 manage it. Once a week I run the Intel SSD optimizer. I hibernate whenever I feel like shutting down that way. It isn't going to last forever anyway.

For reference, I have the 80GB version, which is quite adequate for my boot drive with all my programs. If I had a 40GB or smaller drive, all the SSD "optimization" stuff might make sense to save space.

I really hesitate to move any OS or program files to slower hard drives just to save wear and tear on the SSD. It would be like putting ugly, uncomfortable plastic on your new couch. I do store lots of data files on regular hard drives, for instance video and audio files, and I keep my huge download directory on a regular hard drive as well.
 

flamenko

Senior member
Apr 25, 2010
349
0
0
www.thessdreview.com
You can carry this one step further if you have sufficient RAM (4GB) and ask what a swap file would do for you in any case. Consumers are so tied to the whole swap-file theory without understanding the needs of their system and software.
 

sygyzy

Lifer
Oct 21, 2000
14,001
4
76
I keep hearing different things about swapfiles and SSDs. Some say to disable the swapfile completely. Others say to keep a very small one (512MB?) on the SSD, because some programs won't run if they can't find a swap. Others say to make it 1-2x your system memory (unless you have 8GB, in which case no swap) and store it on the SSD. Lastly, some say to put the swapfile on a different physical (mechanical) hard drive.

I am so confused. Right now I have 4GB of RAM and made a 6GB swapfile stored on my D:, which is a mechanical SATA drive.
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
The correct answer is to use a modern OS (Win7) and let it manage the swap file. There's no reason to muck with it, unless you're trying to install to a 20GB drive/partition or have serious OCD tweaker syndrome.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
There's no such thing as a 100% correct answer, because nobody here knows how much you're using the swapfile. 4GB in a laptop probably means that you've never touched it at all.
You can test that rather easily: disable it or make it tiny (say 256MB) and just work with your PC. You should get a warning if you run out of RAM (well, or a BSOD for that matter).

Since the standard Win7 size is rather large (do they still use 1.5x RAM?) and in 99.9% of all cases wasted, "just let it be" is actually one of the worst possible things to do.
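If you want numbers before experimenting, here's a minimal Win32 sketch (an illustration only) using GlobalMemoryStatusEx, which reports the commit limit as roughly RAM plus pagefile; if the commit charge never gets near your RAM size, a big pagefile is doing nothing for you:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };
    if (!GlobalMemoryStatusEx(&ms))
        return 1;
    /* Physical RAM in use vs. installed */
    printf("physical: %llu / %llu MB\n",
           (unsigned long long)((ms.ullTotalPhys - ms.ullAvailPhys) >> 20),
           (unsigned long long)(ms.ullTotalPhys >> 20));
    /* Commit charge vs. commit limit (RAM + pagefile) */
    printf("commit:   %llu / %llu MB\n",
           (unsigned long long)((ms.ullTotalPageFile - ms.ullAvailPageFile) >> 20),
           (unsigned long long)(ms.ullTotalPageFile >> 20));
    return 0;
}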
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
flamenko said:
You can carry this one step further if you have sufficient RAM (4GB) and ask what a swap file would do for you in any case. Consumers are so tied to the whole swap-file theory without understanding the needs of their system and software.

It provides you with a safety net and allows Windows to page things out in order to make more room in memory for other things. Both are obvious advantages regardless of how much memory you have.

sygyzy said:
I keep hearing different things about swapfiles and SSDs. Some say to disable the swapfile completely. Others say to keep a very small one (512MB?) on the SSD, because some programs won't run if they can't find a swap. Others say to make it 1-2x your system memory (unless you have 8GB, in which case no swap) and store it on the SSD. Lastly, some say to put the swapfile on a different physical (mechanical) hard drive.

Because lots of people give advice without understanding how virtual memory works. Anyone who recommends disabling it completely should be ignored from that point on. Disabling the pagefile just forces the Windows memory manager into a tougher situation, because it has one less tool at its disposal.

The easy answer is to just let it be and leave Windows to manage it. This is correct in virtually all cases. If you really want, you can put one on a secondary drive to save space, but you'll still want to leave a small one on the system drive just in case. If Windows does need to use the pagefile, it'll select the one on the drive with the least I/O currently happening. That's all the tweaking you should ever need to do with it.

sygyzy said:
Well, in this case the SSD is 40GB, so not much larger.

Which is why I find all of this so funny. A year or so ago, every time someone bitched about how big Windows was getting, the reply was "Drives are so big and cheap, why do you care?" Now that SSDs aren't so big and cheap, everyone's back to complaining about the size of Windows. =)

Voo said:
Since the standard Win7 size is rather large (do they still use 1.5x RAM?) and in 99.9% of all cases wasted, "just let it be" is actually one of the worst possible things to do.

It's not the worst possible thing: worse than wasting a few GB of space is having a system that doesn't work reliably, and that's what you'll get if you try to completely disable the pagefile.
 

sygyzy

Lifer
Oct 21, 2000
14,001
4
76
I wasn't complaining about the size of Windows. I was saying that my SSD OS drive is only 40GB, and half of that is taken up by Win 7 and some Program Files. I don't have room to spare on that drive for a swap. I would prefer to put it on a different HD. I wanted to know if I should use one at all, and if so, whether putting it on a mechanical drive would be OK.
 

Golgatha

Lifer
Jul 18, 2003
12,400
1,076
126
After considering all the options, I decided to make my swap file a static 4GB. Basically, my reasoning was that any program archaic enough to actually use the swap file will be limited to 2GB of memory anyway, and 4GB gives a bit of wiggle room above that limitation. I keep it static so that the file isn't constantly being resized.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Nothinman said:
Because lots of people give advice without understanding how virtual memory works. Anyone who recommends disabling it completely should be ignored from that point on. Disabling the pagefile just forces the Windows memory manager into a tougher situation, because it has one less tool at its disposal.
Thank you, but I think I do understand how that stuff works, and I'd love to hear how exactly the pagefile helps as long as you don't run out of memory (or, more exactly, run low with lots of dirty pages that can't be written back anywhere else).
Because after all, the pagefile lets you a) allocate more virtual memory and b) keep some memory free for new tasks by writing old modified data out to it; not more, not less. And obviously both things are only useful if you run short of memory.

And I think the Win7 standard configuration is still 1.5x the RAM until you hit an upper limit, right? So for 4GB RAM that'd be 6GB of disk space, or in other words 16.1% of the available space on the SSD. Since he's using a laptop, "just put it on another drive" is probably not an option, so every GB of pagefile costs him disk space he can't use for something else.

If you can't show an example where the pagefile brings an advantage when you don't run low on memory, I don't see any reason to change my stance, but I'm always curious to learn something new. If you run some old applications that rely on the pagefile (no idea who'd write something like that, ugh) then it may be a good idea to keep a small one, but other than that..


PS: As a nice proof of concept, I've been running this machine (4GB RAM, 64-bit Win7) for several days now without rebooting, compiled several larger projects, edited lots of pictures in Photoshop and had a VM with Linux running, besides the usual stuff like browsing the web and gaming, and my "allowed to grow to 2GB" pagefile still has its initial size of 512MB. Obviously the Win7 memory manager doesn't like me, since it doesn't want to give me all those advantages of storing memory pages on orders-of-magnitude-slower disks ;)
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
sygyzy said:
I wasn't complaining about the size of Windows. I was saying that my SSD OS drive is only 40GB, and half of that is taken up by Win 7 and some Program Files. I don't have room to spare on that drive for a swap. I would prefer to put it on a different HD. I wanted to know if I should use one at all, and if so, whether putting it on a mechanical drive would be OK.

Exactly, and if Windows was smaller it wouldn't be an issue.

But yes, you should have one regardless of how much memory you have, and putting it on a secondary drive is fine, but it's also generally a good idea to have a small one (300MB maybe) on the OS drive as well. If you don't need dumps of kernel memory during STOP errors, you could get by with even smaller.

Golgatha said:
After considering all the options, I decided to make my swap file a static 4GB one. Basically my reasoning was any old program archaic enough to actually use the swap file will be limited to 2GB of memory anyway and 4GB gives a bit of wiggle room above that limitation. I keep it static so that the space isn't being constantly resized.

Except the reasoning is wrong, because userland processes have no control over whether any memory they've allocated ever hits the pagefile. They just allocate memory and let the OS do its thing. Some apps like to be special (or were developed on an OS with shit memory management like Mac OS 9) and do their own memory management via mmap'd files, anonymous mappings and such, like Adobe PS, but those are extremely rare.

And the pagefile is never "constantly resized". It grows if it starts getting full and you've got the max set larger than the min, but it's not shrunk until a reboot.

Get a copy of Inside Windows and read the chapters on memory management; it'll clear up a lot of the misconceptions you have about the pagefile and virtual memory, and how they're related but not at all the same thing.

Voo said:
Thank you, but I think I do understand how that stuff works, and I'd love to hear how exactly the pagefile helps as long as you don't run out of memory.
Because after all, the pagefile lets you a) allocate more virtual memory and b) keep some memory free for new tasks by writing old modified data out to it; not more, not less. And obviously both things are only useful if you run short of memory.

The pagefile doesn't affect the amount of virtual memory available at all. Every 32-bit process has 2GB available, and for 64-bit processes it's some odd TB number that I don't feel like looking up right now, but both have exactly the same amount of VM available regardless of the amount of physical memory installed. Removing the safety net provided by a pagefile serves no purpose other than saving a little bit of disk space.

Having a pagefile helps even when you're not low on memory, because it lets Windows page out modified pages that have no other on-disk backing store to make room for more data. If you've got pages in memory that haven't been referenced in hours or even days, why would you want to force them to stay in memory? With Vista and Win7 this should be even more apparent, as SuperFetch will watch your usage and evict and preload data depending on your usage patterns.
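The reserve/commit distinction underneath this is easy to see in a minimal Win32 sketch (an illustration, assuming a 64-bit build): reserving address space costs nothing against the commit limit, while committing is what gets charged against RAM plus pagefile:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Reserving 1GB of address space consumes no RAM or pagefile yet */
    void *r = VirtualAlloc(NULL, 1ULL << 30, MEM_RESERVE, PAGE_NOACCESS);
    if (!r)
        return 1;
    /* Committing 64MB of it is what counts against the commit limit */
    void *c = VirtualAlloc(r, 64 << 20, MEM_COMMIT, PAGE_READWRITE);
    printf("reserved 1GB at %p, committing 64MB %s\n",
           r, c ? "succeeded" : "failed");
    VirtualFree(r, 0, MEM_RELEASE);
    return 0;
}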

Voo said:
And I think the Win7 standard configuration is still 1.5x the RAM until you hit an upper limit, right? So for 4GB RAM that'd be 6GB of disk space, or in other words 16.1% of the available space on the SSD.

Close, but not quite. I just checked my laptop, and the pagefile is set up with 1x RAM for the min and 1.5x for the max. So for a 40GB SSD it would be 10%; however, using Windows is your choice, and spending that much space on a reliable system is part of the cost.

Voo said:
If you can't show an example where the pagefile brings an advantage in the case where you don't run low on memory, the pagefile is wasted as long as you've got enough memory. If you run some old applications that rely on the pagefile in either case, then it may be a good idea to keep a small one, but other than that.

It's not really wasted, and if you think pagefile usage is relegated only to old applications, then you really don't know how virtual memory actually works or how the pagefile fits into it.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Nothinman said:
The pagefile doesn't affect the amount of virtual memory available at all. Every 32-bit process has 2GB available, and for 64-bit processes it's some odd TB number that I don't feel like looking up right now, but both have exactly the same amount of VM available regardless of the amount of physical memory installed. Removing the safety net provided by a pagefile serves no purpose other than saving a little bit of disk space.
Ahm, no. That may work in Linux, but neither Windows nor Unix lets you overcommit memory.

Just tested this small jewel of code in VS08, compiled as a 64-bit exe (with a small loop afterwards to make sure the compiler doesn't optimize it away).

#include <stdio.h>
#include <stdlib.h>

/* 8ULL keeps the arithmetic 64-bit; plain 8*1024*1024*1024 overflows
   a 32-bit int to 0, which would make this a successful malloc(0) */
int *test = (int *)malloc(8ULL * 1024 * 1024 * 1024);
if (!test) {
    printf("could not allocate memory\n");
    return 1;
}
I get a nice error message when trying to allocate 8GB of memory, oops.

Nothinman said:
Having a pagefile helps even when you're not low on memory, because it lets Windows page out modified pages that have no other on-disk backing store to make room for more data. If you've got pages in memory that haven't been referenced in hours or even days, why would you want to force them to stay in memory? With Vista and Win7 this should be even more apparent, as SuperFetch will watch your usage and evict and preload data depending on your usage patterns.
Ahem, you missed the part with "dirty". Only a small subset of all pages is modified with no other backing store; you don't need a pagefile to evict unmodified pages.
And yes, as I said, if you are low on memory you'll be able to write dirty pages to the pagefile and that will help, but if you're not low on memory, why bother?
Even if SuperFetch couldn't preload one of the more infrequently used applications (and with 4GB, especially in a laptop, you've usually got plenty of RAM for that), I think I'd prefer 10% more space to install those rarely needed applications on, especially since they will load fast anyway and SuperFetch won't help at all shortly after boot, when you'll most probably start the majority of your apps.

If you think 10% of the space is worth being able to start applications even faster on your SSD, but only several minutes after boot, then say so and nobody will disagree with you, because that's a fact (the question remains how large that bonus is), and people can decide whether they think it's worth that much space on such a small SSD.
Oh, and if you'd stop telling everyone who doesn't agree with you that they don't understand how virtual memory works, I think that would also help, especially since you aren't perfect either *points upwards to the overcommitting example* (not that I blame you; that's something which is hardly important most of the time, and you seem to be more of a Linux guy).

Nothinman said:
It's not really wasted, and if you think pagefile usage is relegated only to old applications, then you really don't know how virtual memory actually works or how the pagefile fits into it.
Not what I said. Some old applications use the pagefile in a rather strange way, and you'll get a BSOD if you don't have one. That's the only thing that makes the system more reliable, and I'm not sure how many people really run stuff like that (someone mentioned a game the last time we had this discussion).
 

deimos3428

Senior member
Mar 6, 2009
697
0
0
Voo said:
Ahm, no. That may work in Linux, but neither Windows nor Unix lets you overcommit memory.
Yeah, you can do that in 2.6 kernels, and it's not my favorite thing about Linux.

Voo said:
Just tested this small jewel of code in VS08, compiled as a 64-bit exe (with a small loop afterwards to make sure the compiler doesn't optimize it away).

#include <stdio.h>
#include <stdlib.h>

/* 8ULL keeps the arithmetic 64-bit; plain 8*1024*1024*1024 overflows
   a 32-bit int to 0, which would make this a successful malloc(0) */
int *test = (int *)malloc(8ULL * 1024 * 1024 * 1024);
if (!test) {
    printf("could not allocate memory\n");
    return 1;
}
I get a nice error message when trying to allocate 8GB of memory, oops.
Even with overcommitting enabled, Linux may prevent you from doing a single large malloc like that. The maximum allocation permitted is (physical RAM + swap) * the value in /proc/sys/vm/overcommit_ratio (in percent; default is 50). You can (and probably should) disable overcommitting entirely if, for example, you're not using large amounts of swap. Nobody likes the OOM killer randomly taking out processes.
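For the curious, a minimal sketch of what overcommit looks like from userland (an illustration only; the numbers are hypothetical, assume a 64-bit build, and the behavior depends on the /proc/sys/vm/overcommit_memory mode):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* With overcommit_memory = 1 this can succeed far beyond RAM + swap,
       because pages are only backed by real memory when first touched;
       mode 2 (strict accounting) would refuse it up front instead. */
    size_t sz = 64ULL << 30; /* 64GB, presumably more than RAM + swap */
    char *p = malloc(sz);
    printf("malloc(64GB) %s\n", p ? "succeeded (overcommitted)" : "failed");
    /* Touching it all, e.g. memset(p, 1, sz), is what would eventually
       invoke the OOM killer, so don't actually do that on a live box */
    free(p);
    return 0;
}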

If you're hitting swap on a regular basis, the system is:

a) legitimately starved for physical memory, or
b) running a process with a memory leak, or
c) using an algorithm that is paging to disk needlessly.

Unless you're short on cash (or DIMM slots), swap makes zero sense as a solution to the first: buy more physical RAM. Swap cannot solve the second case at all, just delay the inevitable. And it is in fact the cause of the third case, making its usefulness dubious at best.

While I can see a case being made for a small swap file to mitigate disaster in the event of a minor unforeseen increase in memory utilization, the Windows defaults are a bit absurd and based on the obsolete "1.5x RAM" rule of thumb. The problem is that the "rule" doesn't scale. It's extremely unlikely a 4GB system will suddenly require 6GB of additional memory out of the blue; if you had that sort of memory requirement, you'd be running a 16GB system in the first place. Let's face it, most of us have too much RAM and are stuffing it with file cache.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
deimos3428 said:
Yeah, you can do that in 2.6 kernels, and it's not my favorite thing about Linux.
Oh yeah, mine neither. Though it's arguably enormously handy for VMs, on a normal desktop system it's horrible, horrible, especially if the OOM killer starts flipping coins and kills much more important processes.

deimos3428 said:
Even with overcommitting enabled, Linux may prevent you from doing a single large malloc like that. The maximum allocation permitted is (physical RAM + swap) * the value in /proc/sys/vm/overcommit_ratio (in percent; default is 50). You can (and probably should) disable overcommitting entirely if, for example, you're not using large amounts of swap. Nobody likes the OOM killer randomly taking out processes.
Yeah, I think they added that with 2.5 or something, right? I just didn't know it was the default configuration, but for a desktop OS that's obviously a much more sensible configuration than the old one.

And I agree with the rest of your post; I never understood that 1.5x rule (though Nothinman says they reduced it to 1x, which is at least a bit more sensible). I mean, if I have only 512MB of RAM, chances are good that I could need a 2-3GB swap file (not that I'd ever want to run a modern OS on 512MB), but yeah, for a 4GB system that's highly unlikely..
 

deimos3428

Senior member
Mar 6, 2009
697
0
0
Voo said:
Oh yeah, mine neither. Though it's arguably enormously handy for VMs
Depends on what you're trying to achieve by virtualizing, I suppose. If you're trying to make the most out of a small amount of hardware, swap makes sense, as its goal is similar. At my work, we have large clusters hosting redundant virtualized server farms, and we don't use swap at all. (For that matter, we don't use local disk at all.) Occasionally we need to up the memory allocation of the VMs, but that's a relatively rare and pain-free process, and one that is caught long before the changes are rolled to production.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Well, I see the advantage in overcommitting especially if you have lots of VMs that are often idle and usually don't need much memory, since you get the possibility of allocating X GB of memory to each VM without having to back that up with as much real memory, which would probably go to waste most of the time (gambling with probabilities, if you will).

On the other hand, if all your VMs need a rather constant amount of memory, overcommitting is useless, so it really depends on the workload. But contrary to desktop usage, I can see some scenarios where overcommitting could be useful ;)
 

deimos3428

Senior member
Mar 6, 2009
697
0
0
Voo said:
Well, I see the advantage in overcommitting especially if you have lots of VMs that are often idle and usually don't need much memory, since you get the possibility of allocating X GB of memory to each VM without having to back that up with as much real memory, which would probably go to waste most of the time (gambling with probabilities, if you will).
No real argument from me; I just wanted to note that we're discussing two different types of overcommitting. You're referring to overcommitting the memory allocated to the VMs, whereas I was referring to a VM running Linux, which overcommits to its processes just like any other host. It's unfortunate that the same term is used for two subtly different things.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Voo said:
Ahem, you missed the part with "dirty". Only a small subset of all pages is modified with no other backing store; you don't need a pagefile to evict unmodified pages.

Depends on your workload and the behavior of the app. But at least there's now the MS/Sysinternals RAMMap tool to show you how your memory's being used.

Voo said:
Oh, and if you'd stop telling everyone who doesn't agree with you that they don't understand how virtual memory works, I think that would also help, especially since you aren't perfect either *points upwards to the overcommitting example* (not that I blame you; that's something which is hardly important most of the time, and you seem to be more of a Linux guy).

Except that 99% of people don't understand how it works and regardless they still spew misinformation.

And as you can now see, Linux does do overcommit. =) I vaguely remember reading somewhere that Solaris was really the only OS not to support overcommit out of the box. The default setting of 0 for /proc/sys/vm/overcommit_memory in Linux prevents blatantly obvious overcommits like your example, but still allows smaller allocations to overcommit.

Voo said:
Not what I said. Some old applications use the pagefile in a rather strange way, and you'll get a BSOD if you don't have one. That's the only thing that makes the system more reliable, and I'm not sure how many people really run stuff like that (someone mentioned a game the last time we had this discussion).

AFAIK there's no userland API for direct pagefile usage, so old applications can't use the pagefile directly; they just allocate and let the kernel decide where the backing store comes from.

Voo said:
Well, I see the advantage in overcommitting especially if you have lots of VMs that are often idle and usually don't need much memory, since you get the possibility of allocating X GB of memory to each VM without having to back that up with as much real memory, which would probably go to waste most of the time (gambling with probabilities, if you will).

Yeah, and ESX lets you do just that. You can tell a VM it's got 10GB of memory but have ESX allocate 6GB of that from its swap and only 4GB from memory. With a Linux host, I don't think there's any way to control where the memory comes from like that. Although it still seems like a bad idea for production hosts.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Nothinman said:
And as you can now see, Linux does do overcommit. =) I vaguely remember reading somewhere that Solaris was really the only OS not to support overcommit out of the box. The default setting of 0 for /proc/sys/vm/overcommit_memory in Linux prevents blatantly obvious overcommits like your example, but still allows smaller allocations to overcommit.
I actually stated in my post that Linux lets you overcommit, but neither Windows nor Unix does (Solaris for sure; I've used it often enough) ;)
And until 2.5 or something like that, that flag didn't exist; I'm not sure exactly when they added it.

Nothinman said:
AFAIK there's no userland API for direct pagefile usage, so old applications can't use the pagefile directly; they just allocate and let the kernel decide where the backing store comes from.
Don't ask me what exactly they're doing, but there are old applications that will BSOD without a page file, no matter how much unused memory you've got lying around, and even a 256MB page file will stop them from doing it. So I'm not sure what they're doing, but there's a hack for everything, I assume. They probably thought they were especially clever~


Nothinman said:
Yeah, and ESX lets you do just that. You can tell a VM it's got 10GB of memory but have ESX allocate 6GB of that from its swap and only 4GB from memory. With a Linux host, I don't think there's any way to control where the memory comes from like that. Although it still seems like a bad idea for production hosts.
Yeah, the better hypervisors support overcommitting for such scenarios where it's maybe useful (I'm rather sure MS doesn't, but you can probably argue whether they're in the "better hypervisor" group anyway ;) )
I'm not the VM guy, but it seems like a feature many people want, and it seems kind of useful for lots of small, mostly idle VMs.. though I've got to agree, I wouldn't want the OOM killer starting to kill processes on a production server.