OPEN_MAX is the constant that defines the maximum number of open files allowed for a single program. According to Beginning Linux Programming, 4th Edition, page 101:

"The limit, usually defined by the constant OPEN_MAX in limits.h, varies from system to system, ..."

On my system, the file limits.h in /usr/lib/gcc/x86_64-linux-gnu/4.6/include-fixed does not contain this constant. Am I looking at the wrong limits.h, or has the location of OPEN_MAX changed since 2008?
For what it's worth, the 4th edition of Beginning Linux Programming was published in 2007; parts of it may be a bit out of date. (That's not a criticism of the book, which I haven't read.)
It appears that OPEN_MAX is deprecated, at least on Linux systems. The reason appears to be that the maximum number of files that can be opened simultaneously is not fixed, so a macro that expands to an integer literal is not a good way to get that information.

There's another macro, FOPEN_MAX, that should be similar; I can't think of a reason why OPEN_MAX and FOPEN_MAX, if they're both defined, should have different values. But FOPEN_MAX is mandated by the C language standard, so systems don't have the option of not defining it. The C standard says that FOPEN_MAX expands to the minimum number of files that the implementation guarantees can be open simultaneously, and that its value must be at least eight. (If the word "minimum" is confusing, it's a guarantee that a program can open at least that many files at once.)
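Since FOPEN_MAX is just a macro from <stdio.h>, you can print it directly; a minimal sketch (mine, not from the book or the standard):

    #include <stdio.h>

    int main(void)
    {
        /* FOPEN_MAX comes from <stdio.h>; the C standard requires it to be
           at least 8 (glibc typically defines it as 16). */
        printf("FOPEN_MAX = %d\n", FOPEN_MAX);
        return 0;
    }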
If you want the current maximum number of files that can be opened, take a look at the sysconf() function; on my system, sysconf(_SC_OPEN_MAX) returns 1024. (The sysconf() man page refers to a symbol OPEN_MAX. This is not a count, but a value recognized by sysconf(). And it's not defined on my system.)
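A minimal sketch of that call (assuming a POSIX system with <unistd.h>):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* sysconf() reports the limit at run time; it returns -1 if the
           limit is indeterminate (or on error). */
        long open_max = sysconf(_SC_OPEN_MAX);

        if (open_max == -1)
            printf("_SC_OPEN_MAX is indeterminate\n");
        else
            printf("sysconf(_SC_OPEN_MAX) = %ld\n", open_max);
        return 0;
    }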
I've searched for OPEN_MAX (word match, so excluding FOPEN_MAX) on my Ubuntu system, and found it in the following headers:

    /usr/include/X11/Xos.h
    /usr/include/i386-linux-gnu/bits/local_lim.h
    /usr/include/i386-linux-gnu/bits/xopen_lim.h
I suggest using the magic of grep to find this constant under /usr/include.

Hope it helps you.
Aside from the link given by cste, I would like to point out that there is a /proc/sys/fs/file-max entry that provides the number of files THE SYSTEM can have open at any given time.

Here's some docs: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Directory_Server/8.2/html/Performance_Tuning_Guide/system-tuning.html
Note that this is not to say that there's a GUARANTEE you can open that many files - if the system runs out of some resource (e.g. "no more memory available"), then it may well fail.
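A minimal sketch of reading that entry from C (assuming a Linux system with /proc mounted):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/fs/file-max", "r");
        unsigned long long file_max;

        if (f == NULL) {
            perror("/proc/sys/fs/file-max");
            return 1;
        }
        /* The file contains a single number: the system-wide limit on
           open file handles. */
        if (fscanf(f, "%llu", &file_max) == 1)
            printf("system-wide open-file limit: %llu\n", file_max);
        fclose(f);
        return 0;
    }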
FOPEN_MAX indicates that the C library allows this many files to be opened (at least, as discussed), but there are other limits that may kick in first. Say, for example, the SYSTEM limit is 4000 files, and some applications already running have 3990 files open. Then you won't be able to open more than 7 files [since stdin, stdout and stderr take up three slots too]. And if rlimit is set to 5, then you can only open 2 files of your own.

In my opinion, the best way to know if you can open a file is to open it. If that fails, you have to do something else. If you have some process that needs to open MANY files [e.g. a multithreaded search/compare on a machine with 256 cores and 8 threads per core, where each thread uses three files (file "A", "B" and "diff")], then you may need to ensure that your FOPEN_MAX allows for 3 * 8 * 256 files being opened before you start creating threads, as a thread that fails to open its files will be meaningless. But for most ordinary applications, just try to open the file; if it fails, tell the user (log it, or something), and/or try again...
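To see the per-process rlimit mentioned above, here is a minimal sketch using getrlimit() (RLIMIT_NOFILE is the limit on open file descriptors):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* The soft limit is what actually stops open()/fopen(); it can be
           raised up to the hard limit with setrlimit(). */
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
        return 0;
    }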