MongoDB: out of memory


Question:

I am wondering about MongoDB's memory consumption. I have read the corresponding manual sections and the other questions on this topic, but I think this situation is different. May I ask for your advice?

This is the error from the DB log file:

Fri Oct 26 20:34:00 [conn1] ERROR: mmap private failed with out of memory. (64 bit build) 
Fri Oct 26 20:34:00 [conn1] Assertion: 13636:file /docdata/mongodb/data/xxx_letters.5 open/create failed in createPrivateMap (look in log for more information)

These are the data files:

total 4.0G
drwxr-xr-x 2 mongodb mongodb 4.0K 2012-10-26 20:21 journal
-rw------- 1 mongodb mongodb  64M 2012-10-25 19:34 xxx_letters.0
-rw------- 1 mongodb mongodb 128M 2012-10-20 22:10 xxx_letters.1
-rw------- 1 mongodb mongodb 256M 2012-10-24 09:10 xxx_letters.2
-rw------- 1 mongodb mongodb 512M 2012-10-26 10:04 xxx_letters.3
-rw------- 1 mongodb mongodb 1.0G 2012-10-26 19:56 xxx_letters.4
-rw------- 1 mongodb mongodb 2.0G 2012-10-03 11:32 xxx_letters.5
-rw------- 1 mongodb mongodb  16M 2012-10-26 19:56 xxx_letters.ns

This is the output of free -tm:

             total       used       free     shared    buffers     cached
Mem:          3836       3804         31          0         65       2722
-/+ buffers/cache:       1016       2819
Swap:         4094        513       3581
Total:        7930       4317       3612

Is it really necessary to have enough system memory for the largest data file to fit in? Why do the files grow that much? (From the sequence shown above, I would expect the next file to be 4 GB.) I will try to extend the RAM, but the data will eventually grow even more. Or maybe this is not a memory problem at all?

I have a 64-bit Linux system and use the 64-bit MongoDB 2.0.7-rc1. There is plenty of disk space, and the CPU load is 0.0. This is uname -a:

Linux xxx 2.6.32.54-0.3-default #1 SMP 2012-01-27 17:38:56 +0100 x86_64 x86_64 x86_64 GNU/Linux

Answer 1:

ulimit -a solved the mystery:

core file size          (blocks, -c) 1
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30619
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) 3338968
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 30619
virtual memory          (kbytes, -v) 6496960
file locks                      (-x) unlimited

It worked after setting max memory size and virtual memory to unlimited and restarting everything. By the way, the next data file was again 2 GB: MongoDB preallocates data files in doubling sizes only up to a 2 GB cap, so the 4 GB file I expected never appears.
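
In case it is useful to others, here is a minimal sketch of what I mean by "setting the limits"; it assumes mongod runs as the mongodb user (as the file ownership above suggests) and that your distribution applies /etc/security/limits.conf through PAM:

# For the current shell, before starting mongod
# (-v is virtual memory, -m is max resident memory, both in kbytes):
ulimit -v unlimited
ulimit -m unlimited

# To make this persistent across logins, /etc/security/limits.conf
# can get entries like these ("as" is the address-space limit):
#   mongodb  soft  as  unlimited
#   mongodb  hard  as  unlimited

# Verify which limits the running mongod actually inherited:
cat /proc/$(pidof mongod)/limits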

Sorry for bothering you, but I was desperate. Maybe this helps somebody googling a similar problem.



Tags: mongodb