MongoDB - WiredTiger - "too many open files" error during resync of a secondary replica set member
I'm upgrading a secondary replica set member to WiredTiger. I have upgraded it from MongoDB 2.6.3 to 3.0.4 and changed the storage engine to WiredTiger. It is now resyncing its data from the primary. At some point the following error is received, and the process starts over again:
2015-07-22T13:18:55.658+0000 I INDEX    [rsSync] building index using bulk method
2015-07-22T13:18:55.664+0000 I INDEX    [rsSync] build index done. scanned 1591 total records. 0 secs
2015-07-22T13:18:56.397+0000 E STORAGE  [rsSync] WiredTiger (24) [1437571136:397083][20413:0x7f3d9ed29700], file:WiredTiger.wt, session.create: WiredTiger.turtle: fopen: Too many open files
2015-07-22T13:18:56.463+0000 E REPL     [rsSync] 8 24: Too many open files
2015-07-22T13:18:56.463+0000 E REPL     [rsSync] initial sync attempt failed, 9 attempts remaining
The same machine used to run the 2.6.3 version without any open file limit issues. I'm aware that WiredTiger may create many more files, so that must be the cause, but does it keep them all open simultaneously?
For reference:
cat /proc/sys/fs/file-max
10747371
In /etc/init.d/mongod the configuration is:
ulimit -n 64000
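The `ulimit` line in the init script only takes effect for processes launched by that script, so it is worth confirming what limit the running mongod actually inherited. A minimal Linux sketch, using the current shell (`$$`) as a stand-in process; for mongod you would substitute `$(pidof mongod)`:

```shell
# Inspect the descriptor limits of a running process via /proc.
# $$ (the current shell) is used here only so the example is runnable;
# replace it with $(pidof mongod) to check the mongod process.
pid=$$

# "Max open files" shows the soft and hard per-process limits.
grep "Max open files" /proc/$pid/limits

# Number of file descriptors the process currently holds open.
ls /proc/$pid/fd | wc -l
```

Comparing the second number against the first shows how close the process is to hitting EMFILE (errno 24, the error in the log above).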
According to the documentation, it seems MongoDB holds a file descriptor for every data file. With WiredTiger this results in one file per collection plus one file per index, which by our calculation for our use case can add up to over 700k.
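Since WiredTiger keeps each collection and index in its own `.wt` file, the number of such files in the data directory is a rough upper bound on the descriptors it could need. A sketch, assuming `/var/lib/mongodb` as the dbPath (adjust to your `storage.dbPath`):

```shell
# Estimate WiredTiger's potential descriptor demand by counting its
# data files: one .wt file per collection and one per index.
dbpath=/var/lib/mongodb   # assumed dbPath; adjust to your storage.dbPath
find "$dbpath" -name '*.wt' 2>/dev/null | wc -l
```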
So I can change the ulimit to 700000 or higher, but I'm wondering whether that is the correct solution, and what alternatives exist.
WiredTiger does clean up open file descriptors based on how long they have been idle, but during heavy activity across a large number of collections and indexes you are going to end up bounded by the ulimit on open files.
So, yes, you need to increase the limit until you no longer hit the problem, stick with MMAPv1, or consolidate collections. I would also recommend filing a feature request outlining your use case with sample numbers, asking for a way to prevent this type of pattern (more than one collection per file, for example).
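If raising the limit is the route taken, it has to be raised wherever mongod is launched from. A sketch for a sysvinit setup like the one above; the 700000 figure is the questioner's own estimate, not a recommendation:

```shell
# 1. In /etc/init.d/mongod, raise the per-process descriptor limit
#    before mongod is started (only affects this script's children):
ulimit -n 700000

# 2. If mongod is ever started from a login shell instead, set a matching
#    limit via pam_limits in /etc/security/limits.conf:
#      mongod  soft  nofile  700000
#      mongod  hard  nofile  700000

# 3. After restarting, verify the limit the process actually got:
#    grep "Max open files" /proc/$(pidof mongod)/limits
```

Note that `ulimit -n` cannot exceed the kernel-wide `fs.file-max` shown earlier, and raising the hard limit requires root.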