There are understandably many related questions on stack allocation:
What and where are the stack and heap?
Why is there a limit on the stack size?
However, on various *nix machines I can issue the bash command
ulimit -s unlimited
or the csh command
set stacksize unlimited
How does this change how programs are executed? Are there any impacts on program or system performance (e.g., why wouldn't this be the default)?
In case more system details are relevant, I'm mostly concerned with programs compiled with GCC on Linux running on x86_64 hardware.
Mea culpa, stack size can indeed be unlimited. _STK_LIM is the default, while _STK_LIM_MAX is something that differs per architecture, as can be seen from include/asm-generic/resource.h.
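The generic fallbacks there look roughly like this (some architectures override them, and the exact wording and file layout vary between kernel versions):

    /* RLIMIT_STACK hard-limit fallback; architectures may override it */
    #ifndef _STK_LIM_MAX
    # define _STK_LIM_MAX   RLIM_INFINITY
    #endif

    /* RLIM_INFINITY itself: all bits set, i.e. "no limit" */
    #ifndef RLIM_INFINITY
    # define RLIM_INFINITY  (~0UL)
    #endif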
As can be seen from this example, the generic value is infinite: RLIM_INFINITY is, in the generic case, simply all bits set, meaning no limit at all. So I guess the real answer is: the stack size CAN be limited by some architectures, in which case an unlimited stack size will mean whatever _STK_LIM_MAX is defined to, and if that is infinity, it really is infinite. For details on what it means to set it to infinite and what implications it might have, refer to the other answer; it's way better than mine.

"ulimit -s unlimited" lets the stack grow without limit. This may prevent your program from crashing if you write programs using recursion, especially if your programs are not tail recursive (compilers can "optimize" those into loops) and the depth of recursion is large.
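Under the hood, ulimit -s unlimited just sets the shell's (and thus its children's) soft RLIMIT_STACK to RLIM_INFINITY, and a program can check what it was given with getrlimit(2). A minimal sketch:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Query the stack size limit of the current process. */
        if (getrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }

        if (rl.rlim_cur == RLIM_INFINITY)
            printf("soft stack limit: unlimited\n");
        else
            printf("soft stack limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
        return 0;
    }

Run it once normally and once after "ulimit -s unlimited" in the same shell to see the difference.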
The answer by @Maxwell Hansen almost contains the right answer to the question. However, it is buried deep in a multitude of false claims -- see the comments. Thus, I felt obligated to write this answer.
This will be a little bit pedantic and some of it you probably know already, so bear with me. When you declare local (automatic) variables in a program, space for their data is reserved on the stack, and the kernel grows the stack mapping on demand as the program uses it. You can tell the kernel to limit how much stack space (or heap space, for that matter) any given process may use, so that a single runaway program can't eat an unbounded amount of memory. If there were no limit on how much stack a program could use up, bugs that would normally just crash that one program could instead exhaust memory and destabilize the entire system. Exceeding the allowed stack space is called a "stack overflow"; on Linux the kernel delivers a SIGSEGV to the offending process when the stack cannot grow any further.
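To see the limit in action, here is a hypothetical sketch: a single oversized stack frame that crashes under the common 8 MiB default (ulimit -s 8192) but runs to completion after ulimit -s unlimited, assuming enough memory is available. Compile with gcc -O0 so the array isn't optimized away.

    #include <stdio.h>
    #include <string.h>

    void use_big_frame(void)
    {
        char big[16 * 1024 * 1024];     /* 16 MiB local array lives on the stack */
        memset(big, 0, sizeof big);     /* touch every page so the space is really used */
        printf("used %zu bytes of stack\n", sizeof big);
    }

    int main(void)
    {
        use_big_frame();
        return 0;
    }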
One of the most common stack-related bugs is excessive or infinite recursion. Since each call to a function places its local variables (and the return address) on the stack, recursive functions that are not tail-call optimized can quickly exhaust the stack space the kernel allots to a process. For example, an infinitely recursive function like the one below will crash with a segmentation fault once it exceeds the allowed stack space:
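A minimal sketch of such a function (illustrative only; compile with gcc -O0 so the self-call isn't turned into a loop):

    #include <stdio.h>

    void recurse(unsigned long depth)
    {
        char frame[1024];                 /* local array makes each frame visibly large */
        frame[0] = (char)depth;           /* touch it so it isn't optimized away */
        printf("depth %lu\n", depth);     /* no base case below, so this never stops... */
        recurse(depth + 1);               /* ...until the stack limit is hit (SIGSEGV) */
    }

    int main(void)
    {
        recurse(0);
        return 0;
    }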
Overrunning buffers on the stack is traditionally a very scary thing, as it can be exploited in something called "stack smashing", or a stack buffer overflow. This occurs when a malicious user deliberately overflows a buffer on the stack to overwrite the return address, so that execution jumps to arbitrary instructions of their own choosing instead of the instructions in your own code.
As far as performance goes, there should be no direct impact. If you are hitting your stack limit via recursion, raising the stack size is probably not the best solution, but otherwise it isn't something you should have to worry about. If a program absolutely must store massive amounts of data, it can allocate that data on the heap instead.
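For example, a buffer far larger than any reasonable stack limit is better placed on the heap (a hypothetical sketch):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 256UL * 1024 * 1024;   /* 256 MiB: far too big for the stack */
        char *buf = malloc(n);            /* heap allocation instead of a huge local array */

        if (buf == NULL) {
            perror("malloc");
            return 1;
        }

        buf[0] = 'x';                     /* use both ends so the allocation is real */
        buf[n - 1] = 'y';
        printf("%c %c\n", buf[0], buf[n - 1]);

        free(buf);
        return 0;
    }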