Today I was helping a friend of mine with some C code, and I found some strange behavior that I couldn't explain to him. We had a TSV file containing a list of integers, one integer per line. The first line held the number of lines that followed.
We also had a C file with a very simple "readfile". The first line was read into n, the number of lines, then there was a declaration of:
int list[n]
and finally a for loop over n with an fscanf.
For small values of n (up to ~100,000), everything was fine. However, we found that when n was large (10^6), a segfault would occur.
Finally, we changed the list initialization to
int *list = malloc(n*sizeof(int))
and everything went well, even with very large n.
Can someone explain why this occurred? What was causing the segfault with int list[n], and why did it stop when we started using list = malloc(n*sizeof(int))?
If you are on Linux, you can set ulimit -s to a larger value, which may make the stack allocation work as well. Memory allocated on the stack remains until the end of the function's execution. Memory allocated on the heap (using malloc) can be freed at any time, even before the function returns.
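For example (the program name is hypothetical; the exact ulimit output format depends on your shell):

```shell
# Show the current stack size limit (in kilobytes in most shells)
ulimit -s

# Raise it for the current shell session, e.g. to 64 MB,
# then run the program with the larger stack available:
ulimit -s 65536
./readfile
```

Note that the change only affects the current shell session and its children.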
Generally, heap should be used for large memory allocations.
Assuming a typical implementation, it's most likely that int list[n] allocated the list on your stack, whereas malloc(n*sizeof(int)) allocated memory on your heap.
In the case of the stack there is typically a limit on how large it can grow (if it can grow at all). The heap also has a limit, but it tends to be much larger, constrained (broadly) by your RAM + swap + address space, which is typically at least an order of magnitude more, if not greater.
When you allocate using malloc, memory is allocated from the heap and not from the stack, which is much more limited in size.