
Is having a swap partition really necessary?

Nvidiaguy07

Platinum Member
I'm going to do a fresh install of Windows and Linux on my computer, and I was wondering how much swap space everyone uses?

Normally I'll do 4GB, so it matches the amount of RAM that 32-bit Ubuntu can recognize.

Also, on a side note, how is 64-bit Ubuntu? I know the big problem before was that Flash wouldn't work. Is that fixed now?
 
They used to say double the amount of physical RAM was a good idea, but lately, since my machines have ludicrous amounts of RAM, I usually make the swap the same size. (It very rarely gets hit anyway; I could probably do without it most of the time, but a few gigs of space is cheap.)


I use the AMD64 version of Debian with no real problems. There's a 64-bit version of Flash now (flashplugin-nonfree). I had a few issues at first, but whatever they were, they've been resolved for a while now; I can't even remember what they were. lol.
 
I use a 64MB swap partition; I'm running 64-bit Mint 6. I heard that some programs need the swap space to be there even if they don't use it, so I made a small one even though I have 4GB of RAM. I have never seen my swap partition use more than 2MB of the space.
 
I'd use about 1G, 2G at most, unless you're hardcore into debugging system issues and want RAM dumps (is that even true anymore?). Drive space is cheap enough that 1-2G won't even be noticed.
 
I have 2G of RAM running Ubuntu 9.04, and I usually put 1024MB or 1536MB on the partition. I probably don't need anywhere near that, but honestly I'm not going to miss the 512MB of disk space.

Yes, some programs still require it to be there even if they don't use it.

-Kevin
 
I'm going to do a fresh install of Windows and Linux on my computer, and I was wondering how much swap space everyone uses?

I usually keep it a little bit bigger than the amount of memory so that suspend to disk works.

Normally I'll do 4GB, so it matches the amount of RAM that 32-bit Ubuntu can recognize.

I don't know if Ubuntu ships a kernel with CONFIG_HIGHMEM64G enabled, but if you want, you can compile one yourself that will see all of your memory. It's generally a better idea to just use a 64-bit kernel even on a 32-bit install, but it's possible.
 
OP: You know your own usage patterns best. If you use a lot of memory-intensive applications concurrently, you need a swap -- maybe even a big swap.

In general, if you don't know how much swap you need, follow Nothinman's advice and allocate swap slightly larger than physical RAM to enable hibernation. As others have said, disk space is cheap, and it can come in handy if you accidentally overtax your system.
 
On a notebook you need swap for hibernate: it stores the RAM image in swap before it turns off the power. In that case I go double the RAM; otherwise I go exactly equal to the RAM. If you do not hibernate, you can also use a swap file instead of a physical partition, which will allow you to grow it as your needs increase.
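The sizing rule several posters give above (swap at least as large as RAM if you want hibernation) is easy to check on a running system. A minimal Python sketch, assuming a Linux-style /proc/meminfo; `parse_meminfo` is just a made-up helper name, not anything from this thread:

```python
# Check whether the configured swap could hold a full hibernation image,
# i.e. whether swap is at least as large as physical RAM (Linux-only).
def parse_meminfo(path="/proc/meminfo"):
    """Return /proc/meminfo fields as a dict of integer kB values."""
    fields = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            fields[key.strip()] = int(rest.split()[0])  # first token is the kB count
    return fields

info = parse_meminfo()
ram_kb, swap_kb = info["MemTotal"], info["SwapTotal"]
print(f"RAM: {ram_kb} kB, swap: {swap_kb} kB")
print("swap >= RAM: a hibernation image should fit" if swap_kb >= ram_kb
      else "swap < RAM: hibernation may not fit")
```

This only compares totals; an actual hibernation image is compressed and excludes free pages, so swap equal to RAM is a conservative bound rather than a hard requirement.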
 
Originally posted by: Nvidiaguy07
I'm going to do a fresh install of Windows and Linux on my computer, and I was wondering how much swap space everyone uses?
I've run Linux with swap files (small and large) and without any swap file at all...

Truthfully, I can't tell any difference!

Then, again, I never use hibernate/sleep on my computers, so I don't think it makes any difference.

Put another way, from my understanding of the situation, if you like to 'suspend' your computer, you'd better have a swap file that matches (or exceeds) the amount of RAM installed on your machine, or you're asking for trouble; e.g. it won't come back to life after a hibernation or sleep state.

Personally, I run a smallish swap file, but for the life of me, I don't know why... paranoia, I guess. 😉
 
Regarding Swap Size for Normal Swap-like purposes

The primary role of swap is for use as virtual memory for processes. That is, if you look at all the memory in use, everywhere on the system, by all processes, and sum it up, that total is always less than or equal to the sum of physical DRAM and swap size. Should any process request additional memory that would exhaust the available (DRAM+swap), then that request will fail, and in all likelihood, the application will crash.

This problem only arises when a lot of memory is in use. If all of your running applications together never use more memory than is available in DRAM, then a swap is not needed. However, if you don't want your machine to start behaving strangely when/if you ever do use more memory than you actually have in DRAM, you should have a sizable swap, because it is quite cheap to do so.

Regarding Sleep and Suspend, etc.
There seems to be some confusion here. Modern ACPI has several states for systems:
S0 - On, alive, working, and copacetic
S1 - CPU Halted, but on. Everything else on. Not really used all that much.
S2 - CPU Off, everything else on. Again, not all that useful.
S3 - Sleeping, Standby, or Suspended, depending on who you talk to. Everything except DRAM refresh is powered off. No disk required for this state.
S4 - Hibernating. DRAM contents are written to disk before entering S4. In Linux, the contents are written to swap. If there's not enough room in swap, the machine can't enter S4.
S5 - Off. There is actually still power to wake devices.
Bonus 'state': Mechanical off. Essentially unplugged.

So, in other words, you don't need a swap to suspend. You need a swap greater than or equal to your DRAM size in order to hibernate.

 
The primary role of swap is for use as virtual memory for processes.

Virtual memory and swap space are completely separate entities. Virtual memory is always in use for every process but swap is only used whenever the kernel decides to free up some physical memory. Virtual memory enables that to happen simply with paging but the two are still separate.

That is, if you look at all the memory in use, everywhere on the system, by all processes, and sum it up, that total is always less than or equal to the sum of physical DRAM and swap size. Should any process request additional memory that would exhaust the available (DRAM+swap), then that request will fail, and in all likelihood, the application will crash.

Depends on how the accounting is done and which numbers you add up. If you add up the virtual address space you'll almost always get a number much larger than the amount of physical memory in your system. Hell, right now the virtual space used by epiphany-browser and Xorg on this system is ~2.3G and I've only got 2G in my system. And off the top of my head I'm not sure if shared memory is counted towards a process' RSS; if so, then you'd still be counting code used by multiple processes multiple times.

And the actual request for the memory won't fail: by default, Linux overcommits memory, so requests almost always succeed unless you run out of virtual space. The failure won't actually happen until you try to use the memory and the system can't find enough for you.
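The overcommit behavior described here is controlled by the vm.overcommit_memory sysctl. A small Python sketch (Linux-only; `overcommit_mode` is a hypothetical helper name) that reports the current policy:

```python
# Report the kernel's memory overcommit policy.
# 0 = heuristic overcommit (the default behavior described above),
# 1 = always overcommit, 2 = strict accounting against RAM + swap.
def overcommit_mode(path="/proc/sys/vm/overcommit_memory"):
    with open(path) as f:
        return int(f.read().strip())

mode = overcommit_mode()
names = {0: "heuristic (default)", 1: "always overcommit", 2: "strict accounting"}
print(f"vm.overcommit_memory = {mode} ({names.get(mode, 'unknown')})")
```

Under mode 2 the "request won't fail" claim no longer holds: allocations that exceed the commit limit are refused up front rather than faulting later.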

S4 - Hibernating. DRAM contents are written to disk before entering S4. In Linux, the contents are written to swap. If there's not enough room in swap, the machine can't enter S4.

I'm not sure if the in-kernel implementation or uswsusp support swap files or dedicated hibernation files yet but I know that TuxOnIce does and I'm pretty sure it was on the list for uswsusp at some point.
 
Originally posted by: Nothinman
The primary role of swap is for use as virtual memory for processes.

Virtual memory and swap space are completely separate entities. Virtual memory is always in use for every process but swap is only used whenever the kernel decides to free up some physical memory. Virtual memory enables that to happen simply with paging but the two are still separate.
Correct. I did not intend my post to imply that swap is necessary for VM, merely that it is a mechanism for it.

That is, if you look at all the memory in use, everywhere on the system, by all processes, and sum it up, that total is always less than or equal to the sum of physical DRAM and swap size. Should any process request additional memory that would exhaust the available (DRAM+swap), then that request will fail, and in all likelihood, the application will crash.

Depends on how the accounting is done and which numbers you add up. If you add up the virtual address space you'll almost always get a number much larger than the amount of physical memory in your system. Hell, right now the virtual space used by epiphany-browser and Xorg on this system is ~2.3G and I've only got 2G in my system. And off the top of my head I'm not sure if shared memory is counted towards a process' RSS; if so, then you'd still be counting code used by multiple processes multiple times.
All initialized memory exists somewhere in the system (uninitialized too, most OS's use the zero-page) -- usually in DRAM or on disk, or on some other paging device. If that memory didn't exist somewhere, data would just disappear from some poor process's virtual memory space.

That doesn't mean that the virtual address spaces of all processes exist somewhere. That'd be huge (2^48 on current Intel machines IIRC). But the entire VSS exists somewhere.

There are optimizations, like page sharing, that allow the sum of RSS to be smaller than the combined size of physical storage.

And the actual request for the memory won't fail: by default, Linux overcommits memory, so requests almost always succeed unless you run out of virtual space. The failure won't actually happen until you try to use the memory and the system can't find enough for you.
The malloc(), brk(), or mmap() may or may not fail, but the accesses of the malloc()'d, brk()'d, or mmap()'d memory will fail. If you're lucky, your kernel will also start killing processes to free up some memory. Seeing as we're not in the programming forum, I didn't feel it necessary to make the distinction.

I'm not sure if the in-kernel implementation or uswsusp support swap files or dedicated hibernation files yet but I know that TuxOnIce does and I'm pretty sure it was on the list for uswsusp at some point.

I haven't futzed with hibernation much lately. I wouldn't be surprised if Linux can now use a hiberfile, as Windows does.
 
All initialized memory exists somewhere in the system (uninitialized too, most OS's use the zero-page) -- usually in DRAM or on disk, or on some other paging device. If that memory didn't exist somewhere, data would just disappear from some poor process's virtual memory space.

Which isn't true on Linux, by default Linux overcommits very optimistically.

The malloc(), brk(), or mmap() may or may not fail, but the accesses of the malloc()'d, brk()'d, or mmap()'d memory will fail. If you're lucky, your kernel will also start killing processes to free up some memory. Seeing as we're not in the programming forum, I didn't feel it necessary to make the distinction.

Lucky is subjective; the OOM killer in the Linux kernel isn't always that smart. And the distinction is important because a call to malloc() that's never used will increase the virtual size of a process, but its resident size won't move until the memory is accessed and thus actually allocated.
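That virtual-size-versus-resident-size distinction is easy to observe from a process's own /proc entry. A minimal Python sketch (Linux-only; `vm_rss_kb` is a made-up helper): it maps 64 MiB anonymously, which grows the virtual size while leaving RSS nearly untouched, then writes one byte per page, which is the point at which physical memory actually gets allocated:

```python
import mmap

def vm_rss_kb():
    """Read this process's resident set size (in kB) from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return -1

SIZE = 64 * 1024 * 1024            # 64 MiB
before = vm_rss_kb()
buf = mmap.mmap(-1, SIZE)          # anonymous mapping: address space reserved...
after_map = vm_rss_kb()            # ...but almost no physical pages are resident yet
for off in range(0, SIZE, 4096):   # write one byte per page to fault each page in
    buf[off] = 1
after_touch = vm_rss_kb()
print(f"RSS before: {before} kB, after mmap: {after_map} kB, "
      f"after touching every page: {after_touch} kB")
buf.close()
```

Expect RSS to barely move after the mmap() and to jump by roughly 64 MiB only after the touch loop, which is exactly the malloc()-that's-never-used behavior described above.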

I haven't futzed with hibernation much lately. I wouldn't be surprised if Linux can now use a hiberfile, as Windows does.

I know that uswsusp can most definitely use a file that's setup as Linux swap but I'm not sure if it can use a dedicated file like hiberfil.sys just yet. I know that TuxOnIce can but that's an external patch so most people won't go that route.
 
Originally posted by: Nothinman
All initialized memory exists somewhere in the system (uninitialized too, most OS's use the zero-page) -- usually in DRAM or on disk, or on some other paging device. If that memory didn't exist somewhere, data would just disappear from some poor process's virtual memory space.

Which isn't true on Linux, by default Linux overcommits very optimistically.
Of course it's not true when memory is actually legitimately overcommitted. In that case, some poor process somewhere loses its memory. It's even possible that the memory was dead (though unlikely).

The malloc(), brk(), or mmap() may or may not fail, but the accesses of the malloc()'d, brk()'d, or mmap()'d memory will fail. If you're lucky, your kernel will also start killing processes to free up some memory. Seeing as we're not in the programming forum, I didn't feel it necessary to make the distinction.

Lucky is subjective; the OOM killer in the Linux kernel isn't always that smart.
Yes, I loathe the OOM killer. Woe to the OOM killer.
EDIT: I called it 'lucky' because the other option is to just let the machine stop, or let processes die as they hit COWs.

... And the distinction is important because a call to malloc() that's never used will increase the virtual size of a process, but its resident size won't move until the memory is accessed and thus actually allocated.

It is, of course, legal, when allocating fresh pages to a process, to point all the virtual mappings at PF0 with copy-on-write. That ends up saving a heckuva lot of physical storage for unused memory. Simultaneously, it overcommits the virtual space if there isn't enough physical RAM+swap out there to serve the full request. However, at allocation time, it's all according to Hoyle. In fact, it will continue to work great as long as the application doesn't write any of its new pages.

As Nothinman said, this practice ends up moving the point of failure from malloc()-time to write-time, which, incidentally, is harder to handle because, to the unlucky process, the failure looks more like a segfault than an allocation failure. On the other hand, lots of processes really do request more memory than they actually use, so overcommit sometimes allows you to run programs that wouldn't run at all without it. In my opinion, just about any time you're seeing that kind of memory pressure, it's in the interests of the stability of the system to increase the size of swap.

Bonus Opinion: Overcommit in general is a good reason to prefer to use mmap() to manage memory. mmap() at least has the common courtesy to fail on the spot.

EDIT: I expanded my opinion of the OOM killer.
 
Yes, I loathe the OOM killer. Woe to the OOM killer.
EDIT: I called it 'lucky' because the other option is to just let the machine stop, or let processes die as they hit COWs.

Yea, there's no "good" option at that point but it would be nice if the OOM killer was smarter. Not that I know of a good way to determine what to kill =)
 
Wow, so 10GB is overkill then right? That's usually what I put lol. I actually got owned by running out of swap a while back. Had a buggy app which had a very slow leak. A month after it was live, whole server locked up, had to be rebooted. 🙁 I had JUST hit 1 year uptime too. 1 year uptime on a non raid box is the kind of risk I like to take. 😛
 
I was thinking about this thread today, and I have two more points to add to this dead horse:

Originally posted by: Nothinman
Yes, I loathe the OOM killer. Woe to the OOM killer.
EDIT: I called it 'lucky' because the other option is to just let the machine stop, or let processes die as they hit COWs.

Yea, there's no "good" option at that point but it would be nice if the OOM killer was smarter. Not that I know of a good way to determine what to kill =)

I suppose the best thing to do would be to kill the process that is eating the most memory. However, I don't know the Linux source well enough to know if there is a straightforward way of doing that, especially without allocating memory.

If I had a dime for every time the OOM killer killed ssh or some other friendly process when it should have killed idiot_user_process_7, I would have many dimes indeed.

Originally posted by: RedSquirrel
Wow, so 10GB is overkill then right? That's usually what I put lol. I actually got owned by running out of swap a while back. Had a buggy app which had a very slow leak. A month after it was live, whole server locked up, had to be rebooted. 🙁 I had JUST hit 1 year uptime too. 1 year uptime on a non raid box is the kind of risk I like to take. 😛

Actually, a 32-bit app can't use more than 4 GB. So if you run a 32-bit OS and have more than 4 GB of swap (or RAM+swap), you are safe from having a runaway leak kill the whole machine. The leak will still kill the process when all of its virtual memory space is allocated, but the machine can survive that. So, IMO, any development machine should have ~6GB of swap if it's 32-bit. (Herein, I assume that memory leaks don't make it to deployment... probably unrealistic for some folks.)

In the 64-bit world, all bets are off, of course. I think the x86 page tables are up to 48-bit VAs now. Nobody is going to allocate 2^48 bytes of swap anytime soon.
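The arithmetic behind that sizing argument, as a quick Python sketch (the 3G/1G user/kernel split and the 48-bit virtual address width are the usual x86 conventions, stated here as assumptions rather than taken from this thread):

```python
# Address-space arithmetic behind the "a 32-bit leak can't kill the box" claim.
GiB = 2**30
va_32bit = 4 * GiB        # total 32-bit virtual address space
user_32bit = 3 * GiB      # user-visible portion under the common 3G/1G split
va_48bit = 2**48          # x86-64 virtual address space at 48 bits

print(f"32-bit process ceiling: {user_32bit // GiB} GiB of virtual space")
print(f"48-bit virtual space:   {va_48bit // 2**40} TiB")
```

So with RAM+swap comfortably above 3-4 GiB, a single leaking 32-bit process exhausts its own address space before it can exhaust the machine; a 64-bit process has no such built-in ceiling.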
 
A month after it was live, whole server locked up, had to be rebooted.

I doubt you really had to reboot, although it was probably faster than the alternatives.

I had JUST hit 1 year uptime too.

Yay for not ever patching your kernel?

1 year uptime on a non raid box is the kind of risk I like to take.

Color me not surprised...

I suppose the best thing to do would be to kill the process that is eating the most memory. However, I don't know the Linux source well enough to know if there is a straightforward way of doing that, especially without allocating memory.

Well, if you look at mm/oom_kill.c in the kernel source tree, it's pretty straightforward how it selects a process: there's one function called badness() that calculates which process is the worst, and that's the one that's killed. The comments explain the 8 or so steps involved.
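For the curious, the kernel also exposes the result of that badness calculation per process via /proc/&lt;pid&gt;/oom_score, so you can see who the OOM killer would pick without reading mm/oom_kill.c. A small Python sketch (Linux-only; `oom_score` is a made-up helper name):

```python
import os

def oom_score(pid):
    """Return the kernel's current OOM badness score for a pid
    (higher means more likely to be killed under memory pressure)."""
    with open(f"/proc/{pid}/oom_score") as f:
        return int(f.read().strip())

print(f"this process's OOM score: {oom_score(os.getpid())}")
```

Iterating this over /proc and sorting by score gives a rough preview of the kill order the badness() heuristic would produce.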

If I had a dime for every time the OOM killer killed ssh or some other friendly process when it should have killed idiot_user_process_7, I would have many dimes indeed.

There is a check for processes with lots of children to catch forkbombs however I wouldn't think that sshds would have large enough RSSes to get selected.

So if you run a 32 bit OS, and have more than 4 GB of swap (or RAM+swap), you are safe from having a runaway leak kill the whole machine.

For a single process, yes. But all you need is a few instances of that bad app and you're done. And actually that single process can only touch 3G of VM if you're running a 32-bit kernel.

So, imo, any development machine should have ~6GB of swap, if its 32 bit. (herein, I assume that memory leaks don't make it to deployment... probably unrealistic for some folks)

But if you're doing a web app and multiple requests take off going nuts, you're screwed. Same thing if your app is using a database or something that's running locally.
 
Originally posted by: Nothinman
...
Color me not surprised...
Well if you look at mm/oom_kill.c...
Thank you. I will look there the next time I'm angry about the issue.

... if you're doing a web app and mulitple requests take off going nuts, you're screwed. Same thing if your app is using a database or something which is running locally.

Indeed. That process is hosed. If it's multiple processes, either they're all hosed or the system is hosed, or both. Calling fork() is a real POS if it propagates memory issues.

I would really hope a serious DBMS would be properly configured for busy periods. Those applications are seriously coded to not exhaust memory. But web apps... waaaaaayy less so.

My advice was somewhat geared to the specific case of development, where one process is leaking memory -- if it ended up being N processes, then you would need N times the backing store to deal with the issue, obviously.
 
Indeed. That process is hosed. If it's multiple processes, either they're all hosed or the system is hosed, or both. Calling fork() is a real POS if it propagates memory issues.

Well fork() in Linux uses COW so very little memory is needed to create the new process but depending on what the problem in your code is it could easily be duplicated in each child.
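The COW behavior of fork() can be seen directly: after the fork, the child's writes fault in private copies and never leak back into the parent's memory. A minimal Python sketch (POSIX-only):

```python
import os

# After fork(), parent and child share pages copy-on-write: the child's
# first write to a shared page gives it a private copy, so the parent's
# view of the data is untouched.
data = bytearray(b"parent")
pid = os.fork()
if pid == 0:
    data[:] = b"child!"   # triggers COW; modifies only the child's copy
    os._exit(0)
os.waitpid(pid, 0)
print(data.decode())      # the parent still sees "parent"
```

The flip side, as noted above, is that any leaky logic the child inherits runs independently in its private copy, which is how one slow leak becomes N slow leaks.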

I would really hope a serious DBMS would be properly configured for busy periods. Those applications are seriously coded to not exhaust memory. But web apps... waaaaaayy less so.

One would hope but I've seen some pretty bad DBAs. And I was speaking about a development machine where a lot less attention is paid to things like that.

My advice was somewhat geared to the specific case of development, where one process is leaking memory -- if it ended up being N processes, then you would need N times the backing store to deal with the issue, obviously.

Yea, but in both cases you're screwed and it's just a matter of how screwed.
 
Originally posted by: Nothinman
Well fork() in Linux uses COW so very little memory is needed to create the new process but depending on what the problem in your code is it could easily be duplicated in each child.
Agreed. It becomes a truly nasty problem when child processes leak memory at the same rate as parents, and possibly fork() subsequent children. A leak that grows linearly in time in one process can end up growing geometrically if processes are fork()ing.

One would hope but I've seen some pretty bad DBAs. And I was speaking about a development machine where a lot less attention is paid to things like that.
I have, too. In fact, I've been a pretty terrible DBA, but I was saved from my own incompetence by quality software on more than one occasion. I have a healthy respect for code quality, and resilience, among commercial DBs.

Yea, but in both cases you're screwed and it's just a matter of how screwed.
The death of one process is a lot better than the slow, meandering defeat of a node for most users. With a 64-bit kernel, you have to rely on the OOM killer killing the right process, however.
 
On my desktop, I have 3GB of RAM and I do not create a swap partition for Ubuntu. I did this after noticing that, over months of usage, my memory usage (including swap) never exceeded 1GB even under heavy multitasking.
 