memory allocation / fragmentation

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Are there workloads / situations where the fact that malloc is not allowed to move around already-allocated blocks is a problem? What kind of performance hit would you get for having one extra level of indirection in all pointers?
 

rjain

Golden Member
May 1, 2003
1,475
0
0
It all depends on the allocator strategy. You definitely pay a time penalty for strategies that fragment less, because of the searching needed to find a free block of the right size and location. The classic Mac OS (before X) used handles, which are exactly the extra level of indirection you describe in your second question. There was a call that would compact the heap and adjust the handles to point to the new locations. Really, a good generational GC is just easier. :)
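
To make the handle idea concrete, here's a minimal sketch of the double-indirection scheme (the names are made up for illustration; this is not the real Mac Memory Manager API). The app only ever holds the handle, so the allocator is free to move the underlying block during compaction and fix up just the master pointer:

#include <stdlib.h>

/* A handle is a pointer to a master pointer owned by the allocator. */
typedef void **Handle;

#define MAX_BLOCKS 256
static void *master[MAX_BLOCKS];   /* master pointer table */

Handle toy_new_handle(size_t n)
{
    for (int i = 0; i < MAX_BLOCKS; i++) {
        if (master[i] == NULL) {
            master[i] = malloc(n);
            return master[i] ? &master[i] : NULL;
        }
    }
    return NULL;
}

/* The app dereferences twice to reach its data: *h can change under
   it, but h itself never does. During compaction the allocator can do
   the moral equivalent of:
       void *moved = realloc(master[i], n);   // block may move
       master[i]   = moved;                   // every Handle stays valid
*/

Usage would be something like Handle h = toy_new_handle(64); and then always going through *h, never caching the dereferenced pointer across a point where compaction could run.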
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
I can definitely think of workloads where a too-simplistic strategy can get you in trouble, but they're pretty seriously contrived. :)

Let's say your memory is currently all free, and your allocation strategy is simple first-fit: search through memory for the first free chunk large enough to hold the size of your malloc() and allocate from that piece.

If you did this:

while ( memory available )
{
    malloc(8);
    malloc(16);
}

And then freed all the 8-byte allocations, you'd be stuck: no free chunk would be larger than 8 bytes, so a malloc(16) could never succeed, even though 1/3 of your memory was unused. But this would be an incredibly stupid thing to do, and real allocators do something more sophisticated than that. :)
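
For what it's worth, here's that scenario as a small runnable C program (the PAIRS count is arbitrary, and a real malloc will just grow the heap rather than fail, so the "stuck" behavior only applies to the hypothetical first-fit allocator with no coalescing or moving):

#include <stdio.h>
#include <stdlib.h>

#define PAIRS 1000          /* arbitrary number of 8/16-byte pairs */

int main(void)
{
    void *small[PAIRS], *big[PAIRS];

    /* interleave 8- and 16-byte allocations */
    for (int i = 0; i < PAIRS; i++) {
        small[i] = malloc(8);
        big[i]   = malloc(16);
    }

    /* free only the 8-byte blocks: ~1/3 of the heap is now free,
       but every free chunk is an isolated 8-byte hole */
    for (int i = 0; i < PAIRS; i++)
        free(small[i]);

    /* under the naive first-fit allocator described above, this
       request could not be satisfied from any of those holes */
    void *p = malloc(16);
    printf("malloc(16) after freeing the 8-byte blocks: %p\n", p);

    for (int i = 0; i < PAIRS; i++)
        free(big[i]);
    free(p);
    return 0;
}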

Java's garbage collector, I think, also does something along those lines (compacting the heap and moving objects), but I'm not 100% sure.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: rjain
The MacOS (before X) used handles, which are what you describe in your second question.
cool
There was a call that would compact the heap and adjust the handles to point to the new locations. Really, a good generational GC is just easier. :)
it seems to me that without heap compaction, if your computer has been running for an extended period of time, many of your processes will have their VM usage stuck at whatever their peak usage was... so while with paging it may not be a huge problem, it means you would need a bigger swapfile than you otherwise would. right?
 

rjain

Golden Member
May 1, 2003
1,475
0
0
Originally posted by: CTho9305

it seems to me that without heap compaction, if your computer has been running for an extended period of time, many of your processes will have their VM usage stuck at whatever their peak usage was... so while with paging it may not be a huge problem, it means you would need a bigger swapfile than you otherwise would. right?
Yeah, there are issues, and I think BSD and Linux 2.4 can let the app/libc use madvise() to indicate that a block of memory below brk is unused and can be unmapped. Typically, it's not that huge of a problem except with very specialized allocation and deallocation patterns when using manually managed storage. Even GC'd systems don't bother unmapping after copying or compacting, as the space is typically going to be used up to trigger the next GC anyway.
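
As a rough illustration of the mechanism (using an anonymous mmap'd region for simplicity rather than memory below brk, and MADV_DONTNEED as the advice flag; the exact flags and behavior vary between BSD and Linux):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 16 * 4096;                         /* 16 pages */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... use the region, then decide it's dead space ... */

    /* tell the kernel these pages can be reclaimed; the mapping stays
       valid, but the physical / swap backing can be dropped */
    if (madvise(p, len, MADV_DONTNEED) != 0)
        perror("madvise");

    munmap(p, len);
    return 0;
}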
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
The Mac NEEDED memory blocks to be movable because, until System 7.1 (and optionally until System 7.5 if you turned off 32-bit addressing), it was designed for CPUs which had no MMU, hence no per-application virtual memory -- ALL memory was shared. Since multitasking was cooperative (at least for non-Thread Manager apps), applications controlled when they gave up the CPU, and the OS could guarantee that any block flagged as movable was safe to move whenever an app gave up the CPU.

Rather than make the memory architecture thread-safe when they introduced the Thread Manager, Apple simply required that multithreaded applications on the Mac allocate/deallocate memory only in the main (cooperative multitasking) application thread.

In modern OSes, where each process has its own virtual memory space, this is not nearly as big an issue, and the few applications for which it still is can request larger blocks from the OS and perform their own heap compaction.
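
A hypothetical sketch of that last approach, grabbing one big region from the OS and sub-allocating out of it (a bump allocator here for brevity; a real implementation would track live blocks so it could compact them):

#include <stddef.h>
#include <sys/mman.h>

#define ARENA_SIZE (1 << 20)        /* 1 MB region requested up front */

static unsigned char *arena;
static size_t arena_used;

int arena_init(void)
{
    arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return arena == MAP_FAILED ? -1 : 0;
}

void *arena_alloc(size_t n)
{
    size_t need = (n + 7) & ~(size_t)7;     /* keep 8-byte alignment */
    if (arena_used + need > ARENA_SIZE)
        return NULL;                        /* arena exhausted */
    void *p = arena + arena_used;
    arena_used += need;
    return p;
}

/* Because the application owns every pointer into the arena, it could
   also implement its own compaction pass and fix the pointers up,
   much like the classic Mac OS did with handles. */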
 

rjain

Golden Member
May 1, 2003
1,475
0
0
Apps still had a single, contiguous block of memory. I don't see what having a shared memory space has to do with the issue at hand, as we can just treat the memory not part of the current app's space as being invalid.