Suppose I discover a fatal situation in my program and want to exit with an error code. Sometimes the context of the fatal error is outside the scope of other file descriptors. Is it good practice to close those file descriptors anyway? As far as I know, these files are automatically closed when the process dies.
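For concreteness, a hypothetical sketch of the situation being asked about (the file name and helper function are made up, not part of the question):

```c
/* A fatal error is detected in a helper where the descriptor opened
 * by main() is out of scope, so it cannot be closed before exiting. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void parse_config(const char *line)
{
    if (line == NULL) {
        /* The data file opened in main() is not visible here. */
        fprintf(stderr, "fatal: bad configuration\n");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    int data_fd = open("data.bin", O_RDONLY);  /* hypothetical file */
    if (data_fd == -1)
        return 1;

    parse_config(NULL);   /* exits without closing data_fd */

    close(data_fd);
    return 0;
}
```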
Don't rely on that. Conceptually, it is your responsibility to free allocated memory, close non-standard file descriptors, and so on before the process dies. Of course, every sane OS (and even Windows) will clean up after your process, but that is not something to count on.
Files are automatically closed, but closing them explicitly is still good practice.
See what valgrind reports on an example like the one below.
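(The original snippet isn't shown; the following is a minimal sketch of the kind of program meant, assuming a stream opened with `fopen()` and never closed. The path is arbitrary.)

```c
#include <stdio.h>

int main(void)
{
    /* The buffer that stdio allocates for this stream is never freed,
     * because the stream is never fclose()d before exit. valgrind's
     * leak check flags it (as "still reachable" or lost, depending on
     * the C library). */
    FILE *f = fopen("/etc/passwd", "r");
    if (f == NULL)
        return 1;

    char line[256];
    if (fgets(line, sizeof line, f) != NULL)
        printf("%s", line);

    return 0;   /* no fclose(f) */
}
```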
As you can see, it reports a memory leak.
In some circumstances you can also make use of `atexit()`.
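A minimal sketch of that idea (the handler name and log file are mine, not from the answer): register a cleanup function with `atexit()` so the stream is closed on any normal exit path.

```c
#include <stdio.h>
#include <stdlib.h>

static FILE *log_file;   /* hypothetical global resource */

static void cleanup(void)
{
    /* Runs on any normal termination: exit() or return from main(). */
    if (log_file != NULL)
        fclose(log_file);
}

int main(void)
{
    log_file = fopen("app.log", "a");
    if (log_file == NULL)
        return 1;

    if (atexit(cleanup) != 0) {
        fclose(log_file);
        return 1;
    }

    fputs("started\n", log_file);

    /* Somewhere deep in the program a fatal condition is detected;
     * exit() still runs cleanup(), so the file is flushed and closed. */
    exit(EXIT_FAILURE);
}
```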
C does guarantee that all open files will be closed if your program terminates normally (i.e. via `exit()` or a return from `main()`). However, if your program terminates abnormally, e.g. it is killed by the operating system because it dereferenced a NULL pointer, it is up to the operating system to close the files. It is therefore a good idea to make sure files are closed once they are no longer needed, in case of unexpected termination.

The other reason is resource limits. Most operating systems limit the number of open files (as well as many other things), so it is good practice to return those resources as soon as they are no longer needed. If every program kept all its files open indefinitely, systems would run into problems quite quickly.
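That limit is easy to hit deliberately. A minimal sketch (not from the answer) that keeps opening `/dev/null` without closing it until `open()` fails, typically with EMFILE ("Too many open files"):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int count = 0;

    for (;;) {
        int fd = open("/dev/null", O_RDONLY);
        if (fd == -1) {
            printf("open() failed after %d descriptors: %s\n",
                   count, strerror(errno));
            break;
        }
        count++;   /* deliberately never closed, to exhaust the limit */
    }
    return 0;
}
```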
The classic guide to POSIX programming, "Advanced Programming in the UNIX Environment", states:

> When a process terminates, all of its open files are closed automatically by the kernel. Many programs take advantage of this fact and don't explicitly close open files.
You did not mention the OS in your question, but such behavior should be expected from any OS. Whenever your program's control flow crosses `exit()` or a `return` from `main()`, it is the system's responsibility to clean up after the process.

There is always a danger of bugs in the OS implementation. But, on the other hand, the system has to deallocate far more than a few open file descriptors at process termination: the memory occupied by the executable image, the stack, and the kernel objects associated with the process. You cannot control this behavior from user space; you simply rely on it working as intended. So why can't a programmer rely on the automatic closing of file descriptors?

So the only problem with leaving file descriptors open may be a question of programming style. And, as in the case of using `stdio` objects (i.e. the facilities built around the system-provided file I/O), you may get somewhat disorienting warnings when running valgrind. As for the danger of leaking system resources, there should be nothing to worry about, unless your OS implementation is really buggy.

Yes. Suppose your main program later becomes a class in a larger program; now you have just described a resource leak. You are essentially violating encapsulation by relying on global program state, i.e. on the state of the process - not your module, not a class, not an ADT, not a thread, but the whole process - being in a shutdown state.
Every sane operating system (certainly any form of Linux, or Windows) will close the files when the program terminates. If you have a very simple program, you probably don't need to close files on termination. However, closing files explicitly is still good practice, for the following reasons:
- If you leave it to the OS, you have no control over the order in which the files are closed, which may lead to consistency problems (such as in a multi-file database).
- If there are errors associated with closing the file (such as I/O errors or out-of-space errors), you have no way of reporting them (see the sketch after this list).
- There may be interactions with file locking which need to be handled.
- A routine that closes all files can handle any other clean-up the program needs at the same time (flushing buffers, for instance).
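A minimal sketch of the error-reporting point (the output filename is arbitrary): the return value of `fclose()` is the last chance to learn that buffered data never reached the disk.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("output.txt", "w");   /* arbitrary output file */
    if (f == NULL) {
        fprintf(stderr, "fopen: %s\n", strerror(errno));
        return 1;
    }

    fputs("important data\n", f);

    /* fclose() flushes the stdio buffer; a full disk or I/O error
     * often only shows up here. If we let the OS close the file at
     * exit instead, this error would be silently lost. */
    if (fclose(f) != 0) {
        fprintf(stderr, "fclose: %s\n", strerror(errno));
        return 1;
    }
    return 0;
}
```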