Originally posted by: Nothinman
All initialized memory exists somewhere in the system (uninitialized too, most OS's use the zero-page) -- usually in DRAM or on disk, or on some other paging device. If that memory didn't exist somewhere, data would just disappear from some poor process's virtual memory space.
Which isn't true on Linux; by default, Linux overcommits very optimistically.
Of course it's not true when memory is actually legitimately overcommitted. In that case, some poor process somewhere loses its memory. It's even possible that the memory was dead (though unlikely).
The malloc(), brk(), or mmap() call may or may not fail, but accesses to the malloc()'d, brk()'d, or mmap()'d memory will. If you're lucky, your kernel will also start killing processes to free up some memory. Seeing as we're not in the programming forum, I didn't feel it necessary to make the distinction.
Lucky is subjective, the OOM killer in the Linux kernel isn't always that smart.
Yes, I loathe the OOM killer. Woe to the OOM killer.
EDIT: I called it 'lucky' because the other option is to just let the machine stop, or let processes die as they hit COWs.
... And the distinction is important because a call to malloc() that's never used will increase the virtual size of a process, but its resident size won't move until the memory is accessed and thus actually allocated.
It is, of course, legal, when allocating fresh pages to a process, to point all the virtual mappings at PF0 with copy-on-write. That ends up saving a heckuva lot of physical storage for unused memory. Simultaneously, it overcommits the virtual space, if there isn't enough physical RAM+swap out there to serve the full request. However, at allocation time, it's all according to Hoyle. In fact, it will continue to work great as long as the application doesn't write any of its new pages.
As Nothinman said, this practice ends up moving the point of failure from malloc()-time to write-time, which incidentally is harder to handle because, to the unlucky process, the failure looks more like a segfault than an allocation failure. On the other hand, lots of processes really do request more memory than they actually use, so the overcommit sometimes allows you to run programs that wouldn't run at all without overcommit. In my opinion, just about any time you're seeing that kind of memory pressure, it's in the interests of the stability of the system to increase the size of swap.
Bonus Opinion: Overcommit in general is a good reason to prefer to use mmap() to manage memory. mmap() at least has the common courtesy to fail on the spot.
EDIT: I expanded my opinion of the OOM killer.