Ah and once again..
First of all, there's no real limit on read cycles for flash, so this is only about writes. Second, no, you won't run out of write cycles with desktop usage (servers are another topic) for well over 5 years.
For a more detailed discussion (including all those fun facts about 10k write cycles per cell, wear leveling and so on) just search around; we seem to get this topic at least once a week 😉
But the short answer is: even if you keep a swap file on your SSD and leave indexing enabled (ignoring all those other dubious suggestions on how to "save your SSD"), you won't run out of write cycles.
You can carry this one step further: if you have sufficient RAM (4 GB), ask what a swap file would do for you in any case. Consumers are so tied to the whole swap file theory without understanding the needs of their system and software.
sygyzy said:I keep hearing different things about swapfiles and SSDs. Some say to disable it completely. Others say to keep a very small amount (512MB?) on the SSD because some programs won't run if they can't find a swap. And others say to make it 1-2x your system memory (unless you have 8GB, in which case no swap) and store it on the SSD. Lastly, some say to put the swapfile on a different physical (mechanical) hard drive.
sygyzy said:Well in this case the SSD is 40GB so not much larger.
Voo said:Since the standard Win7 size is rather large (do they still use 1.5x RAM?) and in 99.9% of all cases wasted, actually "just let it be" is probably one of the worst possible things to do.
Voo said:Thank you, but I think I do understand how that stuff works and I'd love to hear how exactly the page file helps as long as you don't run out of memory (or more exactly, run low with lots of dirty pages that can't be written back)

Because lots of people give advice without understanding how virtual memory works. Anyone who recommends disabling it completely should be ignored from that point on. Disabling the pagefile just forces the Windows memory manager into a tougher situation because it has one less tool at its disposal.
I wasn't complaining about the size of Windows. I was saying that my SSD OS drive is only 40GB and half of that is taken up by Win 7 and some Program Files. I don't have room to spare on that drive for a swap. I would prefer to put it on a different HD. I wanted to know if I should use one at all and, if so, whether putting it on a mechanical drive would be OK.
Golgatha said:After considering all the options, I decided to make my swap file a static 4GB one. Basically my reasoning was that any old program archaic enough to actually use the swap file will be limited to 2GB of memory anyway, and 4GB gives a bit of wiggle room above that limitation. I keep it static so that the file isn't constantly being resized.
Voo said:Thank you, but I think I do understand how that stuff works and I'd love to hear how exactly the page file helps as long as you don't run out of memory.
Because after all, the pagefile lets you a) allocate more virtual memory and b) keep some memory free for new tasks by writing old modified data out to it; nothing more, nothing less. And obviously both things are only useful if you run short of memory.
Voo said:And I think the Win7 standard configuration is still 1.5x the RAM until you hit an upper limit, right? So for 4 GB RAM that'd be 6 GB of disk space, or in other words 16.1% of the available space on the SSD.
Voo said:If you can't show an example where the pagefile brings an advantage in the case where you don't run low on memory, the pagefile is wasted as long as you've got enough memory. If you run some old applications that rely on the pagefile in any case, then it may be a good idea to keep a small one, but other than that.
Voo said:Ahm no. That may work in Linux, but windows (and unix) doesn't let you overcommit memory.

The pagefile doesn't affect the amount of virtual memory available at all. Every 32-bit process has 2G available, and for 64-bit processes it's some odd TB number that I don't feel like looking up right now, but both have exactly the same amount of VM available regardless of the amount of physical memory installed. Removing the safety net provided by a pagefile serves no purpose other than saving a little bit of disk space.
Voo said:Ahem you missed the part with "dirty". Only a small subset of all pages will be modified and can't be stored; you don't need a pagefile to do that with unmodified pages.

Having a pagefile helps even when you're not low on memory because it lets Windows store modified pages that have no other on-disk backing store, to make room for more data. If you've got pages in memory that haven't been referenced in hours or even days, why would you want to force them to stay in memory? With Vista and Win7 this should be even more apparent, as SuperFetch will watch your usage patterns and evict and preload data accordingly.
Voo said:Not what I said. Some old applications use the pagefile in a rather strange way and you'll get a BSOD if you don't have one. That's the only thing that makes the system more reliable, and I'm not sure how many people really run stuff like that (someone mentioned a game last time we had that discussion).

It's not really wasted, and if you think that pagefile usage is only relegated to old applications then you really don't know how virtual memory actually works or how the pagefile fits into it.
Yeah, you can do that in 2.6 kernels, and it's not my favorite thing about Linux.

Voo said:Ahm no. That may work in Linux, but windows (and unix) doesn't let you overcommit memory.
Even with overcommitting enabled, Linux may prevent you from doing a single large malloc like that. The maximum allocation permitted is (physical RAM + swap) * the value in /proc/sys/vm/overcommit_ratio (in percent; default is 50). You can (and probably should) disable overcommitting entirely if, for example, you're not using large amounts of swap. Nobody likes the OOM killer randomly taking out processes.

Just tested this small jewel of code in VS08, compiled as a 64-bit exe (with a small loop afterwards to make sure the compiler doesn't optimize it away):
/* cast before multiplying, otherwise 8*1024*1024*1024 overflows int */
int *test = (int*) malloc((size_t)8 * 1024 * 1024 * 1024);
if (!test) {
    printf("could not allocate memory\n");
    return 1;
}
I get a nice clean error message when trying to allocate 8 GB of memory, oops.
Oh yeah, mine neither, though it's arguably enormously handy for VMs, but on a normal desktop system it's horrible, horrible, especially if the OOM killer starts flipping coins and kills much more important processes.

Yeah, you can do that in 2.6 kernels, and it's not my favorite thing about Linux.
Yeah, I think they added that with 2.5 or something, right? Just didn't know that it was the default configuration, but for a desktop OS that's obviously a much more sensible configuration than the old one.

Even with overcommitting enabled, Linux may prevent you from doing a single large malloc like that. The maximum allocation permitted is (physical RAM + swap) * the value in /proc/sys/vm/overcommit_ratio (in percent; default is 50). You can (and probably should) disable overcommitting entirely if, for example, you're not using large amounts of swap. Nobody likes the OOM killer randomly taking out processes.
Depends on what you're trying to achieve by virtualizing, I suppose. If you're trying to make the most out of a small amount of hardware, swap makes sense, as its goal is similar. At my work, we have large clusters hosting redundant virtualized server farms and we don't use swap at all. (For that matter, we don't use local disk at all.) Occasionally we need to up the memory allocation of the VMs, but that's a relatively rare and pain-free process, and one that is caught long before the changes are rolled to production.

Oh yeah, mine neither, though it's arguably enormously handy for VMs
No real argument with you; just wanted to note we're discussing two different types of overcommitting. You're referring to overcommitting the memory allocated to the VMs, whereas I was referring to a VM running Linux, which overcommits to its processes just like any other host. It's unfortunate that the same term is used for two subtly different things.

Voo said:Well I see the advantage in overcommitting, especially if you have lots of VMs that are often idle and usually don't need much memory, since you get the possibility to allocate X GB of memory to each VM without having to back that up with that much real memory, which would probably go to waste most of the time. (Gambling with probabilities, if you will.)
Voo said:Ahem you missed the part with "dirty". Only a small subset of all pages will be modified and can't be stored; you don't need a pagefile to do that with unmodified pages.
Voo said:Oh and if you'd stop telling everyone who doesn't agree with you that they don't understand how virtual memory works, I think that would also help... especially since you aren't perfect either *points upwards to the overcommitting example* (not that I blame you, that's something which is hardly important most of the time, and you seem to be more of a Linux guy)
Voo said:Not what I said. Some old applications use the pagefile in a rather strange way and you'll get a BSOD if you don't have one. That's the only thing that makes the system more reliable and I'm not sure how many people really run stuff like that (someone mentioned a game last time we had that discussion).
Voo said:Well I see the advantage in overcommitting, especially if you have lots of VMs that are often idle and usually don't need much memory, since you get the possibility to allocate X GB of memory to each VM without having to back that up with that much real memory, which would probably go to waste most of the time. (Gambling with probabilities, if you will.)
I actually stated in my post that Linux lets you overcommit, but neither Windows nor Unix does (Solaris for sure, I've used that often enough) 😉

And as you now see, Linux does do overcommit. =) I vaguely remember reading somewhere that Solaris was really the only OS not to support overcommit out of the box. The default setting of 0 for /proc/sys/vm/overcommit_memory in Linux prevents blatantly obvious overcommits like your example, but still allows smaller allocations to overcommit.
Don't ask me what exactly they're doing, but there are old applications that will BSOD without a page file, no matter how much unused memory you've got lying around, and even a 256 MB page file will stop them from doing that. So I'm not sure what they're doing, but there's a hack for everything, I assume. They probably thought they were especially clever~

AFAIK there's no userland API for direct pagefile usage, so old applications can't use the pagefile directly; they just allocate and let the kernel decide where the backing store comes from.
Yeah, the better hypervisors support overcommitting for such scenarios where it's maybe useful (I'm rather sure MS doesn't, but you can probably argue whether they're in the "better hypervisor" group anyway 😉)

Yeah, and ESX lets you do just that. You can tell a VM it's got 10G of memory but have ESX allocate 6G of that from its swap and only 4G from memory. With a Linux host I don't think there's any way to control where the memory comes from like that. Although it still seems like a bad idea for production hosts.