Coredump is getting truncated

Published 2020-03-01 06:29

Question:

I am setting

ulimit -c unlimited

and in the C++ program we are doing:

#include <sys/resource.h>

  struct rlimit corelimit;
  if (getrlimit(RLIMIT_CORE, &corelimit) != 0) {
    return -1;
  }
  // Request an unlimited core size for both the soft and the hard limit.
  // (Raising rlim_max above the current hard limit requires privilege.)
  corelimit.rlim_cur = RLIM_INFINITY;
  corelimit.rlim_max = RLIM_INFINITY;
  if (setrlimit(RLIMIT_CORE, &corelimit) != 0) {
    return -1;
  }

But whenever the program crashes, the core dump it generates is truncated:

BFD: Warning: /mnt/coredump/core.6685.1325912972 is truncated: expected core file size >= 1136525312, found: 638976.

What could be the issue?

We are using Ubuntu 10.04.3 LTS

Linux ip-<ip> 2.6.32-318-ec2 #38-Ubuntu SMP Thu Sep 1 18:09:30 UTC 2011 x86_64 GNU/Linux

This is my /etc/security/limits.conf:

# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#        - NOTE: group and wildcard limits are not applied to root.
#          To apply a limit to the root user, <domain> must be
#          the literal username root.
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#        - chroot - change root to directory (Debian-specific)
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#root            hard    core            100000
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#    ftp             -       chroot          /ftp
#@student        -       maxlogins       4



#for all users
* hard nofile 16384
* soft nofile 9000

More Details

I am using the gcc optimization flag -O3.

I am setting the thread stack size to 0.5 MB.
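
The question does not show how that stack size is set; for reference, a hedged sketch of how a 0.5 MB thread stack is typically requested with pthread_attr_setstacksize (all names here are illustrative, not from the question):

#include <pthread.h>

void *worker(void *arg) { return arg; }

int main() {
  pthread_attr_t attr;
  pthread_attr_init(&attr);
  // Request a 0.5 MB stack; the value must be at least PTHREAD_STACK_MIN.
  pthread_attr_setstacksize(&attr, 512 * 1024);

  pthread_t tid;
  pthread_create(&tid, &attr, worker, nullptr);
  pthread_join(tid, nullptr);
  pthread_attr_destroy(&attr);
  return 0;
}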

Answer 1:

As I remember it, there is a hard limit, which can be set by the administrator, and a soft limit, which is set by the user. If the soft limit is set above the hard limit, the hard limit is what actually takes effect. I'm not sure this is valid for every shell though; I only know it from bash.
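
A minimal sketch (not from the original answer, assuming Linux/glibc) of checking both values from inside the process; getrlimit reports the soft limit in rlim_cur and the hard limit in rlim_max:

#include <sys/resource.h>
#include <cstdio>

int main() {
  struct rlimit rl;
  if (getrlimit(RLIMIT_CORE, &rl) != 0) {
    std::perror("getrlimit");
    return 1;
  }
  // RLIM_INFINITY means "unlimited".
  if (rl.rlim_cur == RLIM_INFINITY)
    std::printf("soft: unlimited\n");
  else
    std::printf("soft: %llu bytes\n", (unsigned long long)rl.rlim_cur);
  if (rl.rlim_max == RLIM_INFINITY)
    std::printf("hard: unlimited\n");
  else
    std::printf("hard: %llu bytes\n", (unsigned long long)rl.rlim_max);
  return 0;
}

From bash, ulimit -Sc and ulimit -Hc show the same two values.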



Answer 2:

I had the same problem with core files getting truncated.

Further investigation showed that ulimit -f (i.e. the maximum file size, RLIMIT_FSIZE) also affects core files, so check that this limit is also unlimited or suitably high. [I saw this on Linux kernel 3.2.0 / Debian wheezy.]
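
A sketch (not from the original answer) of lifting both limits programmatically, in the same style as the code in the question; note that raising rlim_max above the current hard limit requires root or CAP_SYS_RESOURCE:

#include <sys/resource.h>
#include <cstdio>

int main() {
  struct rlimit rl;
  rl.rlim_cur = RLIM_INFINITY;
  rl.rlim_max = RLIM_INFINITY;

  // Core file size limit -- what the question already raises.
  if (setrlimit(RLIMIT_CORE, &rl) != 0)
    std::perror("setrlimit(RLIMIT_CORE)");

  // General file size limit -- per this answer, it also caps the
  // core file the kernel writes, so it can truncate the dump.
  if (setrlimit(RLIMIT_FSIZE, &rl) != 0)
    std::perror("setrlimit(RLIMIT_FSIZE)");

  // ... rest of the program ...
  return 0;
}

From the shell, the equivalent check and fix is running ulimit -f unlimited before starting the program.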



Answer 3:

Hard limits and soft limits have some specifics to them that can be a little hairy: see this note about using sysctl to make the changes last.

There is a file you can edit so that the limit sizes persist across reboots, although there is probably a corresponding sysctl command that will do the same...
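
For per-user limits specifically, the persistent place on this kind of system is /etc/security/limits.conf, whose format the question already quotes. A sketch of entries that lift the core size for all users (an assumption of this edit, not part of the original answer; pam_limits applies it at the next login):

*               soft    core            unlimited
*               hard    core            unlimited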



Answer 4:

If you are using coredumpctl, a possible solution could be to edit /etc/systemd/coredump.conf and increase ProcessSizeMax and ExternalSizeMax:

[Coredump]
#Storage=external
#Compress=yes
ProcessSizeMax=20G
ExternalSizeMax=20G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
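
systemd-coredump reads this file when it captures the next crash, so a daemon restart should not be needed. Once the new dump shows up at full size in coredumpctl list, it can be extracted with coredumpctl dump (both commands assume a systemd recent enough to ship coredumpctl).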


Answer 5:

A similar issue happened when I killed the program manually with kill -3 (SIGQUIT). It happened simply because I did not wait long enough for the core file to finish being written.

Make sure the file has stopped growing in size, and only then open it.
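
One way to check is to repeatedly list it, e.g. with watch -n1 ls -l /mnt/coredump/, and only open the core once the reported size has stopped changing.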



Answer 6:

This solution applies when the Automatic Bug Reporting Tool (abrt) is used.

After I tried everything that was already suggested (nothing helped), I found one more setting that affects the dump size, in /etc/abrt/abrt.conf:

MaxCrashReportsSize = 5000

and increased its value.

Then I restarted the abrt daemon (sudo service abrtd restart), re-ran the crashing application, and got a full core dump file.