Too many open files while ensuring an index in MongoDB

Posted 2020-02-11 03:05

I would like to create a text index on a MongoDB collection. I run:

db.test1.ensureIndex({'text':'text'})

and then I see this in the mongod log:

Sun Jan  5 10:08:47.289 [conn1] build index library.test1 { _fts: "text", _ftsx: 1 }
Sun Jan  5 10:09:00.220 [conn1]         Index: (1/3) External Sort Progress: 200/980    20%
Sun Jan  5 10:09:13.603 [conn1]         Index: (1/3) External Sort Progress: 400/980    40%
Sun Jan  5 10:09:26.745 [conn1]         Index: (1/3) External Sort Progress: 600/980    61%
Sun Jan  5 10:09:37.809 [conn1]         Index: (1/3) External Sort Progress: 800/980    81%
Sun Jan  5 10:09:49.344 [conn1]      external sort used : 5547 files  in 62 secs
Sun Jan  5 10:09:49.346 [conn1] Assertion: 16392:FileIterator can't open file: data/_tmp/esort.1388912927.0//file.233errno:24 Too many open files

I am working on Mac OS X 10.9.1. Please help.

3 Answers
等我变得足够好
Answered 2020-02-11 03:46

It may be related to the per-process open-file limit on your system.

Try checking your system configuration by issuing the following command in a terminal:

ulimit -a
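As a concrete sketch (the exact numbers vary per machine), the value to look at is the "open files" soft limit; the external sort in the question opened 5547 spill files, so the limit needs to be comfortably above that:

```shell
#!/bin/sh
# Print the full per-process limit table, then just the soft limit on
# open file descriptors, which is the figure that matters here.
ulimit -a
echo "open files (soft): $(ulimit -Sn)"
```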

做自己的国王
Answered 2020-02-11 03:48

I added a temporary ulimit -n 4096 before the restore command. You can also use mongorestore --numParallelCollections=1 ..., and that seems to help. But the connection pool still seems to get exhausted.
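A minimal sketch combining the two workarounds (the dump path is a placeholder, and the mongorestore line is commented out since it needs a running mongod):

```shell
#!/bin/sh
# Raise the soft open-file limit for this shell session only; the
# 2>/dev/null guard keeps the script going if the hard limit is lower.
ulimit -n 4096 2>/dev/null || echo "could not raise soft limit to 4096"
echo "effective soft limit: $(ulimit -Sn)"

# Then restore one collection at a time so fewer files are open at once
# (the path is a placeholder for your own dump directory):
# mongorestore --numParallelCollections=1 /path/to/dump
```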

Juvenile、少年°
Answered 2020-02-11 03:50

I've had the same problem (while executing a different operation, but still a "Too many open files" error), and as lese says, it seems to come down to the 'maxfiles' limit on the machine running mongod.

On a Mac, it is better to check the limits with:

sudo launchctl limit

This gives you:

<limit name> <soft limit> <hard limit>
    cpu         unlimited      unlimited      
    filesize    unlimited      unlimited      
    data        unlimited      unlimited      
    stack       8388608        67104768       
    core        0              unlimited      
    rss         unlimited      unlimited      
    memlock     unlimited      unlimited      
    maxproc     709            1064           
    maxfiles    1024           2048  

What I did to get around the problem was to temporarily set the limit higher (mine was originally something like soft: 256, hard: 1000 or something weird like that):

sudo launchctl limit maxfiles 1024 2048

Then re-run the query/indexing operation and see if it breaks. If it doesn't, and you want to keep the higher limits (they reset when you log out of the shell session where you set them), create an '/etc/launchd.conf' file with the following line:

limit maxfiles 1024 2048

(or add that line to your existing launchd.conf file, if you already have one).

This will set the maxfiles limit via launchctl for every shell at login.
