- Type: Bug
- Resolution: Done
- Priority: Major - P3
- Affects Version/s: 2.7.8
- Component/s: Concurrency
- Backwards Compatibility: Fully Compatible
- Operating System: ALL
Reducing contention on the GroupCommit mutex (SERVER-15950) exposed another bottleneck. The attached VTune lock profiler analysis shows that the second hottest lock is the one that protects the LockManager bucket.
Call stack:
Waiting Call Stack 1 of 10: 99.4% (549.400s of 552.974s)
mongod!lock+0xa - mutex.h:164
mongod!scoped_lock+0 - mutex.h:170
mongod!mongo::LockManager::lock+0x47 - lock_mgr_new.cpp:405
mongod!mongo::LockerImpl<(bool)1>::lockImpl+0x264 - lock_state.cpp:708
mongod!mongo::LockerImpl<(bool)1>::lock+0x27 - lock_state.cpp:422
mongod!mongo::LockerImpl<(bool)1>::endWriteUnitOfWork+0xfc - lock_state.cpp:407
mongod!mongo::WriteUnitOfWork::commit+0x3b - operation_context.h:176
mongod!singleInsert+0x1bf - batch_executor.cpp:1110
mongod!insertOne+0x1a0 - batch_executor.cpp:1038
mongod!mongo::WriteBatchExecutor::execOneInsert+0x5f - batch_executor.cpp:1063
mongod!mongo::WriteBatchExecutor::execInserts+0x22c - batch_executor.cpp:860
mongod!mongo::WriteBatchExecutor::bulkExecute+0x33 - batch_executor.cpp:754
mongod!mongo::WriteBatchExecutor::executeBatch+0x3a4 - batch_executor.cpp:263
mongod!mongo::WriteCmd::run+0x168 - write_commands.cpp:143
mongod!mongo::_execCommand+0x33 - dbcommands.cpp:1160
mongod!mongo::Command::execCommand+0xc50 - dbcommands.cpp:1374
mongod!mongo::_runCommands+0x1ef - dbcommands.cpp:1450
mongod!runCommands+0x23 - new_find.cpp:131
mongod!mongo::newRunQuery+0xff9 - new_find.cpp:551
mongod!receivedQuery+0x1f2 - instance.cpp:220
mongod!mongo::assembleResponse+0x9d0 - instance.cpp:393
mongod!mongo::MyMessageHandler::process+0xdf - db.cpp:185
mongod!mongo::PortMessageServer::handleIncomingMsg+0x420 - message_server_port.cpp:234
libpthread-2.18.so!start_thread+0xc2 - [Unknown]:[Unknown]
libc-2.18.so!__clone+0x6c - [Unknown]:[Unknown]
and:
Waiting Call Stack 1 of 8: 99.9% (486.046s of 486.639s)
mongod!lock+0xa - mutex.h:164
mongod!scoped_lock+0 - mutex.h:170
mongod!mongo::LockManager::unlock+0x3b - lock_mgr_new.cpp:521
mongod!mongo::LockerImpl<(bool)1>::_unlockImpl+0xb0 - lock_state.cpp:718
mongod!mongo::LockerImpl<(bool)1>::unlock+0x1c3 - lock_state.cpp:516
mongod!mongo::LockerImpl<(bool)1>::endWriteUnitOfWork+0xcc - lock_state.cpp:403
mongod!mongo::WriteUnitOfWork::commit+0x3b - operation_context.h:176
mongod!singleInsert+0x1bf - batch_executor.cpp:1110
mongod!insertOne+0x1a0 - batch_executor.cpp:1038
mongod!mongo::WriteBatchExecutor::execOneInsert+0x5f - batch_executor.cpp:1063
mongod!mongo::WriteBatchExecutor::execInserts+0x22c - batch_executor.cpp:860
mongod!mongo::WriteBatchExecutor::bulkExecute+0x33 - batch_executor.cpp:754
mongod!mongo::WriteBatchExecutor::executeBatch+0x3a4 - batch_executor.cpp:263
mongod!mongo::WriteCmd::run+0x168 - write_commands.cpp:143
mongod!mongo::_execCommand+0x33 - dbcommands.cpp:1160
mongod!mongo::Command::execCommand+0xc50 - dbcommands.cpp:1374
mongod!mongo::_runCommands+0x1ef - dbcommands.cpp:1450
mongod!runCommands+0x23 - new_find.cpp:131
mongod!mongo::newRunQuery+0xff9 - new_find.cpp:551
mongod!receivedQuery+0x1f2 - instance.cpp:220
mongod!mongo::assembleResponse+0x9d0 - instance.cpp:393
mongod!mongo::MyMessageHandler::process+0xdf - db.cpp:185
mongod!mongo::PortMessageServer::handleIncomingMsg+0x420 - message_server_port.cpp:234
libpthread-2.18.so!start_thread+0xc2 - [Unknown]:[Unknown]
libc-2.18.so!__clone+0x6c - [Unknown]:[Unknown]
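For illustration, a minimal sketch of why the bucket mutex becomes hot (simplified stand-in names such as LockBucket, LockHead, ResourceId and kNumBuckets are assumptions for the example, not the actual lock_mgr_new.cpp code): every lock() and unlock() must take the mutex of the bucket its resource hashes into, and since the Global, Flush and per-database resources are acquired by essentially every write, all threads converge on the few buckets holding those resources, which matches the mutex.h:164 / scoped_lock waits in the stacks above.

// Minimal sketch of a bucketed lock table (illustration only; simplified names,
// not the actual MongoDB implementation).
#include <cstdint>
#include <map>
#include <mutex>

using ResourceId = std::uint64_t;

struct LockHead {
    int grantedCount = 0;  // conflict/wait queues omitted for brevity
};

struct LockBucket {
    std::mutex mutex;                     // the hot mutex seen in the profile
    std::map<ResourceId, LockHead> data;  // resources that hash into this bucket
};

class LockManager {
public:
    static constexpr unsigned kNumBuckets = 128;

    void lock(ResourceId res) {
        LockBucket& b = _buckets[res % kNumBuckets];
        std::lock_guard<std::mutex> lk(b.mutex);  // every lock() on this resource
        ++b.data[res].grantedCount;               // serializes on the bucket mutex
    }

    void unlock(ResourceId res) {
        LockBucket& b = _buckets[res % kNumBuckets];
        std::lock_guard<std::mutex> lk(b.mutex);  // ...and so does every unlock()
        --b.data[res].grantedCount;
    }

private:
    LockBucket _buckets[kNumBuckets];
};

Adding more buckets does not help here, because the contention is not spread across many resources: it is concentrated on the handful of resources (Global, Flush, each active DB) that every operation must touch.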
kaloian.manassiev proposes partitioning the Global/Flush/DB locks to reduce this contention.
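To illustrate the idea (a hypothetical sketch of the partitioning concept only, not the committed design): compatible intent acquisitions of a hot resource can be recorded in a partition chosen by the acquiring thread, so concurrent intent requests no longer serialize on a single bucket mutex, and only the comparatively rare conflicting (exclusive) request has to visit every partition. All names below (PartitionedLockManager, lockIntent, tryLockExclusive, kNumPartitions) are invented for the example.

// Hypothetical sketch of partitioning a hot lock resource (illustration only).
#include <cstdint>
#include <functional>
#include <map>
#include <mutex>
#include <thread>

using ResourceId = std::uint64_t;

struct LockHead {
    int grantedCount = 0;
};

struct Partition {
    std::mutex mutex;
    std::map<ResourceId, LockHead> data;
};

class PartitionedLockManager {
public:
    static constexpr unsigned kNumPartitions = 16;

    // Intent/shared acquisition: touch only the caller's partition, so
    // concurrent intent lockers hit different mutexes.
    void lockIntent(ResourceId res) {
        Partition& p = _partitionForCaller();
        std::lock_guard<std::mutex> lk(p.mutex);
        ++p.data[res].grantedCount;
    }

    // Assumes the same thread that locked also unlocks, so it maps to the
    // same partition.
    void unlockIntent(ResourceId res) {
        Partition& p = _partitionForCaller();
        std::lock_guard<std::mutex> lk(p.mutex);
        --p.data[res].grantedCount;
    }

    // Exclusive acquisition must check every partition for conflicting holders.
    bool tryLockExclusive(ResourceId res) {
        for (Partition& p : _partitions) {
            std::lock_guard<std::mutex> lk(p.mutex);
            auto it = p.data.find(res);
            if (it != p.data.end() && it->second.grantedCount > 0)
                return false;  // conflict: an intent holder exists in this partition
        }
        return true;  // grant bookkeeping omitted for brevity
    }

private:
    Partition& _partitionForCaller() {
        auto h = std::hash<std::thread::id>{}(std::this_thread::get_id());
        return _partitions[h % kNumPartitions];
    }

    Partition _partitions[kNumPartitions];
};

The trade-off is that exclusive acquisitions become more expensive because they must scan all partitions, which is acceptable for the Global/Flush/DB locks since exclusive requests on them are rare relative to intent requests in this workload.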
- related to SERVER-16065 Long flush pauses in MMAPv1 (Closed)