- Type: Bug
- Resolution: Duplicate
- Priority: Critical - P2
- Affects Version/s: 2.0.3
- Component/s: Replication
- Environment: Linux 3.0.18
We newly initiated a replica set, but the to-be secondary never gets out of the "RECOVERING" state: the mongod process is killed by the oom-killer in the middle of the resync (seemingly at the last step, when it is building the secondary's indexes) and the resync starts from scratch every time.
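For completeness, a minimal sketch of how the kill can be confirmed and the member state watched between trials (assuming standard kernel log output and a mongo shell on one of the members):

  # confirm in the kernel log that mongod was killed by the oom-killer
  dmesg | grep -i -E "out of memory|killed process"

  # watch the replica set member states from the mongo shell
  mongo --eval "printjson(rs.status())"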
Journaling is turned on and vm.overcommit_memory is set to 1, as suggested before.
Right now we are testing "echo -17 > /proc/`cat /var/run/mongodb.pid`/oom_adj" (and "swapoff -a"), but every trial takes hours; the commands are sketched below.
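For reference, a minimal sketch of the settings being tested (the pid file path is the one above; note that oom_adj is per-process, so it has to be re-applied after every mongod restart):

  # keep memory overcommit enabled, as suggested before
  sysctl -w vm.overcommit_memory=1

  # exempt the running mongod from the oom-killer (-17 disables OOM killing for that pid)
  echo -17 > /proc/$(cat /var/run/mongodb.pid)/oom_adj

  # verify that the value took effect
  cat /proc/$(cat /var/run/mongodb.pid)/oom_adj

  # disable swap for this trial
  swapoff -a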
The data size is 10x larger than the physical memory, so it seems unlikely that simply doubling the RAM would fix the problem, and the oom-killer's heuristics are rather unpredictable.
I'd like to know what triggers this failure, and what I should keep in mind.
What should we do to get resync done?
- duplicates: SERVER-6414 use regular file io, not mmap for external sort (Closed)
- is related to: SERVER-6141 can't successfully replicate our shards anymore. replication isn't using memory efficiently and linux is invoking oom_killer to kill mongod. servers replicated earlier on same config (with smaller data sets) are still working fine... (Closed)