Type: Question
Resolution: Duplicate
Priority: Major - P3
Affects Version/s: None
Component/s: MapReduce
During the reduce phase of a map-reduce job I'm running, the entire mongod process appears to become locked. I believe the following db.currentOp() output shows the global write lock being held:
{ "opid" : 128630, "active" : true, "secs_running" : 626, "op" : "query", "ns" : "mr_universal_split_2.mr_collection0", "query" : { "$msg" : "query not recording (too large)" }, "client" : "10.11.60.172:48370", "desc" : "conn929", "threadId" : "0x7e73fee9c700", "connectionId" : 929, "locks" : { "^" : "W" }, "waitingForLock" : false, "msg" : "m/r: reduce post processing M/R Reduce Post Processing Progress: 3222/5517 58%", "progress" : { "done" : 3222, "total" : 5517 }, "numYields" : 9040, "lockStats" : { "timeLockedMicros" : { "R" : NumberLong(0), "W" : NumberLong(79265742), "r" : NumberLong(56685244), "w" : NumberLong(728735) }, "timeAcquiringMicros" : { "R" : NumberLong(0), "W" : NumberLong(44727900), "r" : NumberLong(449781385), "w" : NumberLong(164798959) } } }
Is there a way to prevent map-reduce from holding the global write lock during the reduce phase? I recently modified the MR job to pass nonAtomic: true in the output specification. This improves the situation, but other clients still hang for many seconds before queries against unrelated databases return results.
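For context, the invocation now looks roughly like the sketch below; mapFn, reduceFn, and the source collection name are placeholders, and the target collection/database are taken from the currentOp output above. Note that nonAtomic: true is only accepted with the merge and reduce output actions:

// Sketch of the current map-reduce call (placeholders: mapFn, reduceFn, "source_events").
db.source_events.mapReduce(
  mapFn,
  reduceFn,
  {
    out: {
      merge: "mr_collection0",     // target collection seen in the currentOp output
      db: "mr_universal_split_2",
      nonAtomic: true              // periodically releases the lock during post-processing
    }
  }
);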
- duplicates: SERVER-13552 remove unnecessary global lock during "replace" out action (Backlog)