The code below will spawn as many children as possible. The children themselves won't fork further; they exit immediately and become zombies, since the parent never waits on them (once the parent exits, they are reparented to init, which reaps them).
How many child processes will the parent process spawn?
int main(int argc, char *argv[])
{
    /* Only the parent (fork() > 0) stays in the loop;
       each child gets 0 and exits immediately. */
    while (fork() > 0)
        ;
}
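To actually see the answer on a given machine, here is a minimal instrumented sketch (an addition, not from the original code) that counts the parent's successful forks and reports the total once fork(2) fails:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    long count = 0;
    pid_t pid;
    while ((pid = fork()) > 0)  /* parent keeps forking */
        count++;
    if (pid == 0)               /* child: exit at once, leaving a zombie */
        return 0;
    perror("fork");             /* parent: fork failed, typically EAGAIN */
    fprintf(stderr, "spawned %ld children\n", count);
    return 0;
}

Run it under a low ulimit -u if you don't want to stress the machine.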
The number of child processes can be limited with setrlimit(2) using RLIMIT_NPROC. Notice that fork(2) can fail for several reasons. You could use the bash builtin ulimit to set that limit.
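For instance, a small sketch that lowers RLIMIT_NPROC before forking (the value 64 is an arbitrary example):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Arbitrary example value: cap processes of this user at 64. */
    struct rlimit rl = { .rlim_cur = 64, .rlim_max = 64 };
    if (setrlimit(RLIMIT_NPROC, &rl) == -1) {
        perror("setrlimit");
        return 1;
    }
    /* From here on, fork(2) fails with EAGAIN once the real user
       ID owns 64 processes. */
    return 0;
}

Note that RLIMIT_NPROC counts all processes of the real user ID, not only the descendants of the calling process.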
You can use getrlimit (or parse /proc/self/limits, see proc(5)) to get that information.
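A corresponding sketch with getrlimit(2):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NPROC, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }
    /* RLIM_INFINITY shows up as a very large number here. */
    printf("RLIMIT_NPROC: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    return 0;
}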
System-wide, you might use /proc/sys/kernel/threads-max, since:
This file specifies the system-wide limit on the number of threads
(tasks) that can be created on the system.
There is also /proc/sys/kernel/pid_max:
This file specifies the value at which PIDs wrap around (i.e., the
value in this file is one greater than the maximum PID). PIDs
greater than this value are not allocated; thus, the value in this
file also acts as a system-wide limit on the total number of
processes and threads. The default value for this file, 32768,
results in the same range of PIDs as on earlier kernels. On 32-bit
platforms, 32768 is the maximum value for pid_max. On 64-bit
systems, pid_max can be set to any value up to 2^22 (PID_MAX_LIMIT,
approximately 4 million).
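Both files contain a single integer and can be read like any ordinary file; a short sketch (the helper name is mine):

#include <stdio.h>

/* Hypothetical helper: read one integer from a procfs file,
   returning -1 on failure. */
static long read_proc_long(const char *path)
{
    FILE *f = fopen(path, "r");
    long value = -1;
    if (f) {
        if (fscanf(f, "%ld", &value) != 1)
            value = -1;
        fclose(f);
    }
    return value;
}

int main(void)
{
    printf("threads-max: %ld\n",
           read_proc_long("/proc/sys/kernel/threads-max"));
    printf("pid_max: %ld\n",
           read_proc_long("/proc/sys/kernel/pid_max"));
    return 0;
}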
However, there could be other limitations (notably swap space).
A task for the kernel is either a single-threaded process or some thread inside some process, e.g. created by the low-level syscall clone(2) (or some kernel thread like kworker, ksoftirqd, etc.).
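For illustration, a sketch using the glibc clone wrapper to create such a thread-like task (it assumes a downward-growing stack, as on x86):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int task_fn(void *arg)
{
    printf("task running, pid=%ld\n", (long)getpid());
    return 0;
}

int main(void)
{
    enum { STACK_SIZE = 1024 * 1024 };
    char *stack = malloc(STACK_SIZE);
    if (!stack)
        return 1;
    /* CLONE_VM shares the address space (thread-like task);
       SIGCHLD lets the parent waitpid() for it like a child. */
    pid_t pid = clone(task_fn, stack + STACK_SIZE,
                      CLONE_VM | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}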
BTW, the practical number of processes is much more limited by available resources. A typical Linux desktop has only a few hundred of them (right now, my Debian/x86-64 desktop with 32GB RAM & an i5-4690S has 227 processes). So a process is quite an expensive resource (it needs RAM, it needs CPU...). If you have too many of them you'll experience thrashing. And in practice, you don't want to have too many runnable processes or schedulable tasks (probably only a few dozen at most, perhaps no more than a few per core).
Update -- I was perhaps too fast and didn't notice that the children themselves don't fork (so this isn't a classic fork bomb). How far it gets then probably depends on how expensive a fork is on that machine. The zombies also use system resources, which will at some point be exhausted. And the ulimit command mentioned below is of course still valid. --
Update 2: I see this in some copy of /linux/kernel/fork.c, which should keep a machine usable (max_threads apparently limits the number of processes as well, since each process has at least one thread):
/*
 * The default maximum number of threads is set to a safe
 * value: the thread structures can take up at most half
 * of memory.
 */
max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
--
Original answer:
It will create as many processes as are physically possible (that is, it will quickly freeze the machine -- I have done that), or as many as are allowed for the current user or shell, if such a limit is imposed. In bash one can impose a limit through the built-in shell command ulimit -u <number>. Note that a process doesn't have to be started through bash (a cron job, for instance, isn't), so such a shell limit won't always apply.