I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?
It's worth mentioning that if you have systemd set up, then things are a little bit different. The setup typically has the core files piped, by means of the core_pattern sysctl value, through systemd-coredump(8). The core file size rlimit would typically be configured as "unlimited" already. It is then possible to retrieve the core dumps using coredumpctl(1). The storage of core dumps, etc. is configured by coredump.conf(5). There are examples of how to get the core files in the coredumpctl man page, but in short it would look like this: first find the core file, then retrieve it.
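A minimal sketch of those two steps, assuming the crashed program was called some-program (a placeholder name; any coredumpctl match, such as a PID, works as well):

    # find the core file: list recorded crashes for the program
    coredumpctl list some-program

    # get the core file: write the newest matching dump to ./core
    coredumpctl dump some-program -o core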
Maybe you could do it this way. This program is a demonstration of how to trap a segmentation fault and shell out to a debugger (this is the original code used under AIX); it prints the stack trace up to the point of the segmentation fault. You will need to change the sprintf variable to use gdb in the case of Linux. You may additionally have to add a parameter to get gdb to dump the core, as shown in this blog.
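A minimal sketch of that idea on Linux (this is not the original AIX code): the handler shells out to gdb, which prints a backtrace of the faulting process and is also asked to write a core file via its generate-core-file command.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* SIGSEGV handler: attach gdb to our own PID, print a backtrace
     * and write a core file, then exit. Calling system() from a signal
     * handler is not async-signal-safe, but is fine for a demo. */
    static void segv_handler(int sig)
    {
        char cmd[128];
        (void)sig;
        sprintf(cmd, "gdb --batch -p %d -ex bt -ex generate-core-file",
                (int)getpid());
        system(cmd);
        _exit(1);
    }

    int main(void)
    {
        signal(SIGSEGV, segv_handler);

        int *p = NULL;
        *p = 42;    /* deliberate segmentation fault */
        return 0;
    }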
There are more things that may influence the generation of a core dump. I encountered these, either of which may prevent the core from being generated:

- /proc/sys/kernel/core_pattern
- /proc/sys/fs/suid_dumpable

There are more situations which may prevent generation; they are described in the man page - try man core.

In order to activate the core dump, do the following (a consolidated sketch of these settings follows the list):

1. In /etc/profile, comment out the line that limits the core file size.
2. In /etc/security/limits.conf, comment out the line that limits the core file size.
3. Execute the command limit coredumpsize unlimited and check it with the command limit.
4. To check whether the core file gets written, you can kill the process in question with kill -s SEGV <PID> (this should not be needed; it is just a check in case no core file gets written).
5. Once the core file has been written, make sure to deactivate the core dump settings again in the files from steps 1.-3.!
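A sketch of what those steps typically look like. The exact lines in /etc/profile and /etc/security/limits.conf differ between distributions, so the commented-out lines below are typical examples rather than quotes from any particular system, and <PID> stands for the process in question:

    # 1. /etc/profile - a typical line to comment out:
    #    ulimit -S -c 0 > /dev/null 2>&1

    # 2. /etc/security/limits.conf - a typical line to comment out:
    #    *               soft    core            0

    # 3. (tcsh/csh) raise the limit and verify it:
    limit coredumpsize unlimited
    limit

    # 4. Optional check: force a segmentation fault in the target process:
    kill -s SEGV <PID>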
What I did at the end was attach gdb to the process before it crashed, and then when it got the segfault I executed the generate-core-file command. That forced generation of a core dump.

This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether programs should dump core. If you type ulimit -c unlimited,
then that will tell bash that its programs can dump cores of any size. You can specify a size such as 52M instead of unlimited if you want, but in practice this shouldn't be necessary since the size of core files will probably never be an issue for you.
In tcsh, you'd type limit coredumpsize unlimited.
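For reference, a quick sketch of setting and then checking the limit in both shells:

    # bash: allow core files of unlimited size, then confirm the setting
    ulimit -c unlimited
    ulimit -c

    # tcsh: the equivalent commands
    limit coredumpsize unlimited
    limit coredumpsize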