What does “ulimit -s unlimited” do?

Asked 2019-01-14 00:29

There are, understandably, many related questions on stack allocation:

What and where are the stack and heap?

Why is there a limit on the stack size?

Size of stack and heap memory

However, on various *nix machines I can issue the bash command

ulimit -s unlimited

or the csh command

set stacksize unlimited

How does this change how programs are executed? Are there any impacts on program or system performance (e.g., why wouldn't this be the default)?

In case more system details are relevant, I'm mostly concerned with programs compiled with GCC on Linux running on x86_64 hardware.
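
For reference, here is a small sketch I use to see what those commands actually change; it just reads RLIMIT_STACK with getrlimit(2):

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    /* RLIMIT_STACK is the per-process stack size limit that
       "ulimit -s" (bash) and "set stacksize" (csh) manipulate. */
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("soft stack limit: unlimited\n");
    else
        printf("soft stack limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);

    if (rl.rlim_max == RLIM_INFINITY)
        printf("hard stack limit: unlimited\n");
    else
        printf("hard stack limit: %llu bytes\n", (unsigned long long)rl.rlim_max);

    return 0;
}

Running it before and after "ulimit -s unlimited" in the same shell shows the soft limit going from the default (typically 8192 kB) to unlimited.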

3 Answers
Answer #2 · 2019-01-14 00:56

Mea culpa, the stack size can indeed be unlimited. _STK_LIM is the default, and _STK_LIM_MAX differs per architecture, as can be seen in include/asm-generic/resource.h:

/*
 * RLIMIT_STACK default maximum - some architectures override it:
 */
#ifndef _STK_LIM_MAX
# define _STK_LIM_MAX           RLIM_INFINITY
#endif

As this shows, the generic value is infinite, and RLIM_INFINITY is, again, in the generic case defined as:

/*
 * SuS says limits have to be unsigned.
 * Which makes a ton more sense anyway.
 *
 * Some architectures override this (for compatibility reasons):
 */
#ifndef RLIM_INFINITY
# define RLIM_INFINITY          (~0UL)
#endif

So I guess the real answer is: the maximum stack size CAN be capped by a particular architecture. In that case "unlimited" means whatever _STK_LIM_MAX is defined to be, and when that is RLIM_INFINITY the stack really is unlimited. For details on what setting it to infinity means and what implications it might have, refer to the other answer; it is far better than mine.
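
If it helps, "ulimit -s unlimited" boils down to a setrlimit(2) call on RLIMIT_STACK. Here is a rough sketch of my own (not code from the kernel tree) of doing the same thing from C:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    /* Roughly what "ulimit -s unlimited" does: raise the soft limit to the
       hard limit, which on Linux defaults to RLIM_INFINITY for the stack
       (see _STK_LIM_MAX above). Raising the hard limit itself would need
       CAP_SYS_RESOURCE / root. */
    rl.rlim_cur = rl.rlim_max;

    if (setrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    printf("stack soft limit now %s\n",
           rl.rlim_cur == RLIM_INFINITY ? "unlimited" : "finite");
    return 0;
}

The new limit applies to the calling process and is inherited by its children across fork/exec.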

Juvenile、少年°
Answer #3 · 2019-01-14 01:01

"ulimit -s unlimited" lets the stack grow unlimited. This may prevent your program from crashing if you write programs by recursion, especially if your programs are not tail recursive (compilers can "optimize" those), and the depth of recursion is large.

The answer by @Maxwell Hansen almost contains the right answer to the question, but it is buried in a number of incorrect claims (see the comments), which is why I felt obligated to write this one.

叼着烟拽天下
Answer #4 · 2019-01-14 01:08

This will be a little bit pedantic, and some of it you probably know already, so bear with me. When a function declares local variables, space for their data is set aside on the stack. You can tell the kernel to limit how much stack (or heap, for that matter) any given process may use, so that a single runaway program can't eat up memory the rest of the system needs. If there were no such limit, bugs that would normally just crash one program could instead drag down the whole machine. When a process tries to use more stack space than it has been granted, the kernel kills it with a segmentation fault; this condition is called a "stack overflow".

One of the most common ways to hit the limit is excessive or infinite recursion. Since each call to a function places all of its local variables in a new stack frame, recursive programs that are not tail-call optimized can quickly exhaust the stack space the kernel grants the process. For example, this infinitely recursive function will be killed once it exceeds its allotted stack space:

int smash_the_stack(int number) {
    /* Compiled without aggressive optimization, each call pushes a new
       stack frame and nothing ever returns, so frames pile up until the
       stack limit is exceeded and the kernel delivers SIGSEGV. */
    smash_the_stack(number + 1);

    return 0;
}

Related, and traditionally much scarier, is "stack smashing", i.e. a stack buffer overflow. This is not the same thing as running out of stack space: it happens when a program writes past the end of a buffer that lives on the stack. A malicious user who controls the input can exploit this to overwrite the function's saved return address and redirect execution to arbitrary instructions of their own, instead of the instructions in your code.
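
To illustrate the difference, here is a deliberately unsafe sketch of my own (not something you should ever ship) showing the classic stack buffer overflow:

#include <stdio.h>
#include <string.h>

/* Deliberately unsafe: strcpy() does no bounds checking, so any input
   longer than 15 characters overruns buf and starts overwriting whatever
   sits next to it in the stack frame - the compiler's stack-protector
   canary (if enabled) and, eventually, the saved return address. */
static void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);          /* overflow if strlen(input) >= 16 */
    printf("copied: %s\n", buf);
}

int main(int argc, char **argv) {
    vulnerable(argc > 1 ? argv[1] : "short and harmless");
    return 0;
}

With a long enough argument, a distribution gcc (which usually enables -fstack-protector-strong) typically aborts with a "stack smashing detected" message. Note that "ulimit -s" has no effect on any of this; it only caps how far the stack may legitimately grow.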

As far as performance goes, there should be no impact whatsoever. If you are hitting the stack limit through deep recursion, raising the limit is probably not the best fix, but otherwise it isn't something you need to worry about. If a program absolutely must store massive amounts of data, it should put it on the heap instead.
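
A rough sketch of that last point: the same ~32 MB of data blows the default stack limit as a local array but is unremarkable on the heap:

#include <stdio.h>
#include <stdlib.h>

#define NELEMS (4 * 1000 * 1000)            /* ~32 MB worth of doubles */

int main(void) {
    /* double big[NELEMS];  -- as a local array this would need ~32 MB of
       stack and usually crashes under the default 8 MiB limit unless
       "ulimit -s unlimited" (or a large enough limit) is in effect. */

    double *big = malloc(NELEMS * sizeof *big);   /* heap: bounded only by available memory */
    if (big == NULL) {
        perror("malloc");
        return 1;
    }

    for (size_t i = 0; i < NELEMS; i++)
        big[i] = (double)i;

    printf("last element: %f\n", big[NELEMS - 1]);
    free(big);
    return 0;
}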
