A fellow programmer claims that his 64-bit Mac program keeps running out of memory due to memory fragmentation. I countered that this is not possible, given that the program only allocates about 1-2 GB of memory in total, and that most allocations are in the range of 40-200 bytes, even if there are millions of them.
I believe it is simply not possible to fragment a 64-bit address space in such a way that an allocation request fails because the memory allocator cannot find a free gap of the requested size. My belief is based on the understanding that nearly the entire 64-bit address space is available to a process, or maybe only 62 bits of it.
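To make the scenario concrete, here is roughly the worst case I have in mind, as a minimal sketch (the block count and sizes are made up; adjust to taste). It creates millions of small blocks, frees every other one to leave a checkerboard of small holes, then asks for blocks far too large to fit in any hole:

```c
/* Sketch: worst-case small-block fragmentation, then larger requests.
 * N and the size range are illustrative, not taken from his program. */
#include <stdio.h>
#include <stdlib.h>

#define N 10000000  /* ten million small blocks */

int main(void) {
    void **blocks = malloc(N * sizeof *blocks);
    if (!blocks) return 1;

    /* Phase 1: many small allocations (40-200 bytes). */
    for (size_t i = 0; i < N; i++) {
        blocks[i] = malloc(40 + (i % 5) * 40);
        if (!blocks[i]) { printf("small alloc %zu failed\n", i); return 1; }
    }

    /* Phase 2: free every other block, leaving only small holes. */
    for (size_t i = 0; i < N; i += 2) {
        free(blocks[i]);
        blocks[i] = NULL;
    }

    /* Phase 3: 1 MiB requests cannot fit in any 40-200 byte hole;
     * if the allocator can still map fresh address space, they succeed. */
    for (int j = 0; j < 1000; j++) {
        void *p = malloc(1 << 20);
        if (!p) { printf("large alloc %d failed\n", j); return 1; }
    }
    puts("no failure: fragmentation alone did not exhaust the address space");
    return 0;
}
```

Even though every hole is too small for the 1 MiB requests, the allocator should simply map fresh pages elsewhere in the huge address space, which is why I doubt fragmentation is the culprit.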
So I wonder whether there are other restrictions that could be causing his problems. E.g., is the entire 64-bit address space really available, or can only a much smaller subset of it be used for allocations? Or is there a limit on the total number of virtual pages?
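If such a restriction exists, it should be measurable. Here is a sketch of how one might probe the usable range empirically, on the assumption that an mmap with MAP_ANON and PROT_NONE reserves address space without committing physical memory (the 1 TiB step size is an arbitrary choice for illustration):

```c
/* Sketch: reserve address space in 1 TiB chunks until mmap refuses,
 * to see how large the usable virtual range actually is. */
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t step = 1ULL << 40;  /* 1 TiB per reservation */
    size_t total = 0;
    for (;;) {
        void *p = mmap(NULL, step, PROT_NONE,
                       MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED) break;
        total += step;
    }
    printf("reserved %zu TiB of address space before mmap failed\n",
           total >> 40);
    return 0;
}
```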
Apple's Kernel Programming Guide does not appear to provide any information on this topic.
I found some hints about limitations in How can a moderately sized memory allocation fail in a 64 bit process on Mac OS X?, but the suggestions, such as a total limit of around 128 TB, were not backed up with references. E.g., does that limit apply to each process individually, or to all processes together? And is it a cap on the total amount that can be allocated, or on the address range that allocations can come from?
I.e., if 128 TB is the total address range that malloc can hand out, and especially if that range is shared by all processes, then I could imagine this causing said problems. But is that really it?
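For scale, 128 TB is 2^47 bytes, which is exactly the lower canonical half of the 48-bit virtual addresses that current x86-64 CPUs implement, so a per-process cap of that magnitude would at least be plausible. Either way, his program could check how much address range it has actually consumed at the moment an allocation fails. Here is a sketch using the Mach TASK_VM_INFO query (available on newer OS X releases, as far as I know; virtual_size is the mapped address range, resident_size the physical footprint):

```c
/* Sketch: report the process's own virtual and resident memory sizes
 * via the TASK_VM_INFO flavor of task_info(). */
#include <stdio.h>
#include <mach/mach.h>

int main(void) {
    task_vm_info_data_t info;
    mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
    kern_return_t kr = task_info(mach_task_self(), TASK_VM_INFO,
                                 (task_info_t)&info, &count);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "task_info failed: %d\n", kr);
        return 1;
    }
    printf("virtual size:  %llu MB\n", info.virtual_size  >> 20);
    printf("resident size: %llu MB\n", info.resident_size >> 20);
    return 0;
}
```

If virtual_size is nowhere near 128 TB when the failure happens, then whatever the limit is, address-range exhaustion is probably not what he's hitting.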
The final answer can probably be found in the Darwin source code. Does anyone have a good grip on it and can summarize the relevant limits here?