I have a malloc in C that is 26901^2 * sizeof(double) bytes.
This got me thinking: what is the largest value that can go here?
Also, would I have any problems defining a macro to access this 2D array?
#define DN(i,j) ((int)i * ny + (int)j)
Because this seems not to be working for me, or at least I am unsure that it is. I can't figure out how to make TotalView dive on a macro to tell me what A[DN(indx,jndx)] is actually looking at.
The largest memory block you can ask malloc() for is the largest size_t value: this is SIZE_MAX, from <stdint.h>. The largest amount you can successfully request is obviously dependent on the operating system and the configuration of the individual machine.

Your macro is not safe. It performs the index calculation with an
int variable, which is only required to have a range up to 32767. Any value higher than that can cause signed overflow, which results in undefined behaviour. You are probably best off doing the calculation as a size_t, since that type must be able to hold any valid array index:

#define DN(i,j) ((size_t)(i) * ny + (size_t)(j))

(Although note that if you supply negative values for i or j, you'll get an index far out of bounds.)

The malloc question is answered (it depends on the OS, which you don't specify), so about that define:
#define DN(i,j) ((int)i * ny + (int)j)
is not quite safe, for someone might do
DN(a+b,c)
which expands to
((int)a+b * ny + (int)c)
which is probably not what you wanted: the cast and the multiplication bind only to a and to b, not to the whole expressions. So put a lot of parentheses in there:
#define DN(i,j) ((int)(i) * ny + (int)(j))
To see what DN(indx,jndx) points to, just
printf("%d\n", DN(indx,jndx));
Observations

Assuming a typical allocator, such as the one glibc uses, there are some observations:

- The whole block has to be served by a single call to malloc.
- Large requests are not satisfied from the heap proper (glibc handles them by malloc calling through to mmap to acquire pages).

Experiment
Here's a simple program to allocate the largest possible block. Compile with gcc largest_malloc_size.c -Wall -O2.
Running the above program (./a.out) on my
Linux stanley 2.6.32-24-generic-pae #39-Ubuntu SMP Wed Jul 28 07:39:26 UTC 2010 i686 GNU/Linux
machine yields an allocation of exactly 2800 MiB, which can be confirmed by observing the relevant mapping in /proc/[number]/maps.

Conclusion
It appears the heap has been expanded in the area between the program data and code, and the shared library mappings, which sit snug against the user/kernel memory space boundary (obviously 3G/1G on this system).
This result suggests that the maximum allocatable space using malloc is roughly equal to the size of user space, minus the mappings already present for the program's code and data and for the shared libraries.
Notes

With respect to glibc and Linux implementations, the following manual pages are of great interest:

- malloc(3)
- mmap(2)
Afterword
This test was done on an x86 kernel. I'd expect similar results from an x86_64 kernel, albeit with vastly larger memory regions returned. Other operating systems may vary in their placement of mappings and in their handling of large mallocs, so results could be considerably different.

The size parameter in a call to malloc is of type size_t, which varies by implementation. See this question for more.
That depends on your malloc implementation!
According to Wikipedia, "Since the v2.3 release, the GNU C library (glibc) uses a modified ptmalloc2, which itself is based on dlmalloc v2.7.0." dlmalloc refers to Doug Lea's malloc implementation. The important thing to note in this implementation is that large mallocs are accomplished through the operating system's memory mapped file functionality, so these blocks can be quite large indeed without many problems of finding a contiguous block.
26'901^2 = 723'663'801. If your double is 8 bytes, that is about 5.8 GB, well under 8 GB. I see no problem at all allocating that much memory, and my apps routinely allocate (on 64-bit systems) much more. (The biggest memory consumption I have ever seen was 420 GB, on a Solaris 10 NUMA system with 640 GB RAM, with the largest contiguous block around 24 GB.)

The largest value is hard to pin down, since it is platform dependent: on 32-bit systems it depends on the user-space/kernel-space split. As things stand at the moment, I think one would first hit the limit of the actual physical RAM before reaching the limit of what libc can allocate. (And the kernel doesn't care: it just expands virtual memory, often without even considering whether there is sufficient RAM to back it.)