2014-12-29T22:08:21.225+0000 I CONTROL ** WARNING: You are running this process as the root user, which is not recommended.
2014-12-29T22:08:21.225+0000 I CONTROL
2014-12-29T22:08:21.225+0000 I SHARDING [mongosMain] MongoS version 2.8.0-rc4 starting: pid=1 port=27017 64-bit host=mongo_S1 (--help for usage)
2014-12-29T22:08:21.225+0000 I CONTROL [mongosMain] db version v2.8.0-rc4
2014-12-29T22:08:21.225+0000 I CONTROL [mongosMain] git version: 3ad571742911f04b307f0071979425511c4f2570
2014-12-29T22:08:21.225+0000 I CONTROL [mongosMain] build info: Linux ip-10-45-3-116 3.13.0-36-generic #63-Ubuntu SMP Wed Sep 3 21:30:07 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
2014-12-29T22:08:21.225+0000 I CONTROL [mongosMain] allocator: tcmalloc
2014-12-29T22:08:21.226+0000 I CONTROL [mongosMain] options: { sharding: { configDB: "mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017" } }
2014-12-29T22:08:21.230+0000 I NETWORK [mongosMain] SyncClusterConnection connecting to [mongo_CFG1:27017]
2014-12-29T22:08:21.230+0000 I NETWORK [mongosMain] SyncClusterConnection connecting to [mongo_CFG2:27017]
2014-12-29T22:08:21.231+0000 I NETWORK [mongosMain] SyncClusterConnection connecting to [mongo_CFG3:27017]
2014-12-29T22:08:21.297+0000 I NETWORK [mongosMain] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool
2014-12-29T22:08:21.301+0000 I NETWORK [mongosMain] SyncClusterConnection connecting to [mongo_CFG1:27017]
2014-12-29T22:08:21.301+0000 I SHARDING [LockPinger] creating distributed lock ping thread for mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 and process mongo_S1:27017:1419890901:1804289383 (sleeping for 30000ms)
2014-12-29T22:08:21.301+0000 I NETWORK [LockPinger] SyncClusterConnection connecting to [mongo_CFG1:27017]
2014-12-29T22:08:21.301+0000 I NETWORK [mongosMain] SyncClusterConnection connecting to [mongo_CFG2:27017]
2014-12-29T22:08:21.302+0000 I NETWORK [LockPinger] SyncClusterConnection connecting to [mongo_CFG2:27017]
2014-12-29T22:08:21.302+0000 I NETWORK [mongosMain] SyncClusterConnection connecting to [mongo_CFG3:27017]
2014-12-29T22:08:21.302+0000 I NETWORK [LockPinger] SyncClusterConnection connecting to [mongo_CFG3:27017]
2014-12-29T22:08:21.942+0000 I SHARDING [LockPinger] cluster mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 pinged successfully at Mon Dec 29 22:08:21 2014 by distributed lock pinger 'mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017/mongo_S1:27017:1419890901:1804289383', sleeping for 30000ms
2014-12-29T22:08:22.012+0000 I SHARDING [mongosMain] distributed lock 'configUpgrade/mongo_S1:27017:1419890901:1804289383' acquired, ts : 54a1d0d528d48a01b1441edf
2014-12-29T22:08:22.013+0000 I SHARDING [mongosMain] starting upgrade of config server from v0 to v6
2014-12-29T22:08:22.013+0000 I SHARDING [mongosMain] starting next upgrade step from v0 to v6
2014-12-29T22:08:22.013+0000 I SHARDING [mongosMain] about to log new metadata event: { _id: "mongo_S1-2014-12-29T22:08:22-54a1d0d628d48a01b1441ee0", server: "mongo_S1", clientAddr: "N/A", time: new Date(1419890902013), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 6 } }
2014-12-29T22:08:22.244+0000 I SHARDING [mongosMain] writing initial config version at v6
2014-12-29T22:08:22.265+0000 I SHARDING [mongosMain] about to log new metadata event: { _id: "mongo_S1-2014-12-29T22:08:22-54a1d0d628d48a01b1441ee2", server: "mongo_S1", clientAddr: "N/A", time: new Date(1419890902265), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 6 } }
2014-12-29T22:08:22.291+0000 I SHARDING [mongosMain] upgrade of config server to v6 successful
2014-12-29T22:08:22.291+0000 I NETWORK [mongosMain] SyncClusterConnection connecting to [mongo_CFG1:27017]
2014-12-29T22:08:22.291+0000 I NETWORK [mongosMain] SyncClusterConnection connecting to [mongo_CFG2:27017]
2014-12-29T22:08:22.292+0000 I NETWORK [mongosMain] SyncClusterConnection connecting to [mongo_CFG3:27017]
2014-12-29T22:08:22.462+0000 I SHARDING [mongosMain] distributed lock 'configUpgrade/mongo_S1:27017:1419890901:1804289383' unlocked.
2014-12-29T22:08:22.636+0000 I SHARDING [Balancer] about to contact config servers and shards
2014-12-29T22:08:22.637+0000 I NETWORK [mongosMain] waiting for connections on port 27017
2014-12-29T22:08:22.637+0000 I SHARDING [Balancer] config servers and shards contacted successfully
2014-12-29T22:08:22.637+0000 I SHARDING [Balancer] balancer id: mongo_S1:27017 started at Dec 29 22:08:22
2014-12-29T22:08:23.085+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' acquired, ts : 54a1d0d628d48a01b1441ee4
2014-12-29T22:08:23.398+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' unlocked.
2014-12-29T22:08:23.400+0000 I NETWORK [mongosMain] connection accepted from 192.168.0.1:53586 #1 (1 connection now open)
2014-12-29T22:08:23.402+0000 I NETWORK [conn1] end connection 192.168.0.1:53586 (0 connections now open)
2014-12-29T22:08:33.447+0000 I NETWORK [mongosMain] connection accepted from 192.168.0.1:53593 #2 (1 connection now open)
2014-12-29T22:08:33.448+0000 I SHARDING [conn2] couldn't find database [admin] in config db
2014-12-29T22:08:33.457+0000 I SHARDING [conn2] put [admin] on: config:mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017
2014-12-29T22:08:33.458+0000 I NETWORK [conn2] starting new replica set monitor for replica set xxx with seeds mongo_D1:27017
2014-12-29T22:08:33.458+0000 I NETWORK [ReplicaSetMonitorWatcher] starting
2014-12-29T22:08:33.459+0000 I NETWORK [conn2] changing hosts to xxx/mongo_D1:27017,mongo_D2:27017 from xxx/mongo_D1:27017
2014-12-29T22:08:33.460+0000 I SHARDING [conn2] going to add shard: { _id: "xxx", host: "xxx/mongo_D1:27017,mongo_D2:27017" }
2014-12-29T22:08:33.484+0000 I SHARDING [conn2] about to log metadata event: { _id: "mongo_S1-2014-12-29T22:08:33-54a1d0e128d48a01b1441ee7", server: "mongo_S1", clientAddr: "N/A", time: new Date(1419890913484), what: "addShard", ns: "", details: { name: "xxx", host: "xxx/mongo_D1:27017" } }
2014-12-29T22:08:33.642+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' acquired, ts : 54a1d0e128d48a01b1441ee6
2014-12-29T22:08:33.642+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG1:27017]
2014-12-29T22:08:33.643+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG2:27017]
2014-12-29T22:08:33.643+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG3:27017]
2014-12-29T22:08:33.655+0000 I SHARDING [conn2] couldn't find database [test] in config db
2014-12-29T22:08:33.679+0000 I SHARDING [conn2] put [test] on: xxx:xxx/mongo_D1:27017,mongo_D2:27017
2014-12-29T22:08:33.679+0000 I COMMAND [conn2] enabling sharding on: test
2014-12-29T22:08:33.787+0000 I COMMAND [conn2] CMD: shardcollection: { shardCollection: "test.test", key: { _id: 1 } }
2014-12-29T22:08:33.787+0000 I SHARDING [conn2] enable sharding on: test.test with shard key: { _id: 1 }
2014-12-29T22:08:33.787+0000 I SHARDING [conn2] about to log metadata event: { _id: "mongo_S1-2014-12-29T22:08:33-54a1d0e128d48a01b1441ee9", server: "mongo_S1", clientAddr: "N/A", time: new Date(1419890913787), what: "shardCollection.start", ns: "test.test", details: { shardKey: { _id: 1 }, collection: "test.test", primary: "xxx:xxx/mongo_D1:27017,mongo_D2:27017", initShards: [], numChunks: 1 } } 2014-12-29T22:08:33.788+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' unlocked. 2014-12-29T22:08:33.798+0000 I SHARDING [conn2] going to create 1 chunk(s) for: test.test using new epoch 54a1d0e128d48a01b1441eea 2014-12-29T22:08:33.811+0000 I SHARDING [conn2] ChunkManager: time to load chunks for test.test: 0ms sequenceNumber: 2 version: 1|0||54a1d0e128d48a01b1441eea based on: (empty) 2014-12-29T22:08:33.840+0000 I SHARDING [conn2] about to log metadata event: { _id: "mongo_S1-2014-12-29T22:08:33-54a1d0e128d48a01b1441eeb", server: "mongo_S1", clientAddr: "N/A", time: new Date(1419890913840), what: "shardCollection", ns: "test.test", details: { version: "1|0||54a1d0e128d48a01b1441eea" } } 2014-12-29T22:08:33.861+0000 D SHARDING [conn2] Request::process end ns: admin.$cmd msg id: 906744567 op: 2004 attempt: 0 0ms 2014-12-29T22:08:38.863+0000 D SHARDING [conn2] Request::process begin ns: test.$cmd msg id: 432565108 op: 2004 attempt: 0 2014-12-29T22:08:38.863+0000 D SHARDING [conn2] command: test.$cmd { insert: "test", ordered: true, documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ] } ntoreturn: -1 options: 0 2014-12-29T22:08:38.863+0000 D SHARDING [conn2] starting execution of write batch of size 1 for test.test 2014-12-29T22:08:38.863+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.863+0000 D SHARDING [conn2] sending write batch to mongo_D1:27017: { insert: "test", documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ], ordered: true, metadata: { shardName: "xxx", shardVersion: [ Timestamp 1000|0, ObjectId('54a1d0e128d48a01b1441eea') ], session: 0 } } 2014-12-29T22:08:38.863+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:08:38.864+0000 D SHARDING [conn2] write results received from mongo_D1:27017: { ok: 1, n: 1, lastOp: Timestamp 1419890918000|1, electionId: ObjectId('54a1d0e0b12b60a304edb60b') } 2014-12-29T22:08:38.864+0000 D SHARDING [conn2] finished execution of write batch for test.test 2014-12-29T22:08:38.864+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:38.864+0000 D NETWORK [conn2] polling for status of 
connection to 192.168.0.98:27017, no events 2014-12-29T22:08:38.864+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:38.864+0000 I NETWORK [conn2] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool 2014-12-29T22:08:38.864+0000 D SHARDING [conn2] about to initiate autosplit: ns: test.test, shard: xxx:xxx/mongo_D1:27017,mongo_D2:27017, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: MaxKey } dataWritten: 179426 splitThreshold: 921 2014-12-29T22:08:38.864+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:08:38.864+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.864+0000 D NETWORK [conn2] dbclient_rs findOne to primary node in xxx 2014-12-29T22:08:38.864+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.865+0000 D SHARDING [conn2] chunk not full enough to trigger auto-split 2014-12-29T22:08:38.865+0000 D SHARDING [conn2] Request::process end ns: test.$cmd msg id: 432565108 op: 2004 attempt: 0 1ms 2014-12-29T22:08:38.865+0000 D SHARDING [conn2] Request::process begin ns: admin.$cmd msg id: -1557959019 op: 2004 attempt: 0 2014-12-29T22:08:38.865+0000 D SHARDING [conn2] command: admin.$cmd { flushRouterConfig: 1 } ntoreturn: -1 options: 4 2014-12-29T22:08:38.865+0000 D SHARDING [conn2] Request::process end ns: admin.$cmd msg id: -1557959019 op: 2004 attempt: 0 0ms 2014-12-29T22:08:38.866+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:38.866+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:38.866+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:38.866+0000 D SHARDING [conn2] DBConfig unserialize: test { _id: "test", partitioned: true, primary: "xxx" } 2014-12-29T22:08:38.866+0000 D SHARDING [conn2] major version query from 0|0||54a1d0e128d48a01b1441eea and over 0 shards is { query: { ns: "test.test", lastmod: { $gte: Timestamp 0|0 } }, orderby: { lastmod: 1 } } 2014-12-29T22:08:38.866+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:38.866+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:38.866+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:38.867+0000 D SHARDING [conn2] found 1 new chunks for collection test.test (tracking 1), new version is 1|0||54a1d0e128d48a01b1441eea 2014-12-29T22:08:38.867+0000 D SHARDING [conn2] loaded 1 chunks into new chunk manager for test.test with version 1|0||54a1d0e128d48a01b1441eea 2014-12-29T22:08:38.867+0000 I SHARDING [conn2] ChunkManager: time to load chunks for test.test: 0ms sequenceNumber: 3 version: 1|0||54a1d0e128d48a01b1441eea based on: (empty) 2014-12-29T22:08:38.867+0000 D SHARDING [conn2] found 0 dropped collections and 1 sharded collections for database test 2014-12-29T22:08:38.867+0000 D SHARDING [conn2] Request::process begin ns: test.test msg id: 436300480 op: 2004 attempt: 0 2014-12-29T22:08:38.867+0000 D SHARDING [conn2] query: test.test { $query: {}, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 4 2014-12-29T22:08:38.867+0000 D NETWORK [conn2] creating pcursor over QSpec { ns: "test.test", n2skip: 0, n2return: -1, options: 4, query: { $query: {}, 
$readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} } 2014-12-29T22:08:38.867+0000 D QUERY [conn2] [QLOG] Beginning planning... ============================= Options = NO_TABLE_SCAN Canonical query: ns=test.test limit=0 skip=0 Tree: $and Sort: {} Proj: {} ============================= 2014-12-29T22:08:38.867+0000 D QUERY [conn2] [QLOG] Index 0 is kp: { _id: 1 } 2014-12-29T22:08:38.867+0000 D QUERY [conn2] [QLOG] Rated tree: $and 2014-12-29T22:08:38.867+0000 D QUERY [conn2] [QLOG] Planner: outputted 0 indexed solutions. 2014-12-29T22:08:38.867+0000 D NETWORK [conn2] initializing over 1 shards required by [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] initializing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.868+0000 D SHARDING [conn2] setting shard version of 1|0||54a1d0e128d48a01b1441eea for test.test on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017 2014-12-29T22:08:38.868+0000 D SHARDING [conn2] last version sent with chunk manager iteration 2, current chunk manager iteration is 3 2014-12-29T22:08:38.868+0000 D SHARDING [conn2] setShardVersion xxx mongo_D1:27017 test.test { setShardVersion: "test.test", configdb: "mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", shard: "xxx", shardHost: "xxx/mongo_D1:27017,mongo_D2:27017", version: Timestamp 1000|0, versionEpoch: ObjectId('54a1d0e128d48a01b1441eea') } 3 2014-12-29T22:08:38.868+0000 D SHARDING [conn2] saveGLEStats lastOpTime:0:0 electionId:54a1d0e0b12b60a304edb60b 2014-12-29T22:08:38.868+0000 D SHARDING [conn2] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('54a1d0e128d48a01b1441eea'), ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('54a1d0e0b12b60a304edb60b') } } 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] needed to set remote version on connection to value compatible with [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] dbclient_rs say using secondary or tagged node selection in xxx, read pref is { pref: "primary pref", tags: [ {} ] } (primary : mongo_D1:27017, lastTagged : [not cached]) 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] dbclient_rs selecting primary node mongo_D1:27017 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] initialized query (lazily) on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] finishing over 1 shards 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 
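The write batch at 22:08:38 and the finds that follow it (note the $readPreference: { mode: "primaryPreferred" } field in each query) come from the client connected as conn2. A hedged mongo-shell approximation of that traffic; the document is abbreviated from the one shown in the logged insert, and the actual client/driver is unknown:

// write routed by the mongos to the primary of shard "xxx" (mongo_D1:27017)
db.getSiblingDB("test").test.insert({
    mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ],
    mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ]
    // ...remaining host entries as in the logged document
})
// repeated read with read preference primaryPreferred, matching the logged $readPreference field
db.getSiblingDB("test").test.find().readPref("primaryPreferred")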
2014-12-29T22:08:38.868+0000 D NETWORK [conn2] finishing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:08:38.868+0000 D NETWORK [conn2] finished on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "(done)", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 2014-12-29T22:08:38.868+0000 D SHARDING [conn2] Request::process end ns: test.test msg id: 436300480 op: 2004 attempt: 0 1ms 2014-12-29T22:08:43.458+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx 2014-12-29T22:08:43.458+0000 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:mongo_D2:27017 2014-12-29T22:08:43.459+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:08:43.459+0000 D NETWORK [ReplicaSetMonitorWatcher] connected to server mongo_D2:27017 (192.168.0.96) 2014-12-29T22:08:43.459+0000 D NETWORK [ReplicaSetMonitorWatcher] connected connection! 2014-12-29T22:08:43.459+0000 D SHARDING [ReplicaSetMonitorWatcher] checking wire version of new connection mongo_D2:27017 (192.168.0.96) 2014-12-29T22:08:43.460+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890923460), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:08:43.792+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:43.792+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:43.792+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:43.792+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:43.792+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:43.792+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:43.793+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:43.793+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:43.793+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:43.793+0000 D SHARDING [Balancer] found 1 shards listed on config server(s): SyncClusterConnection [mongo_CFG1:27017 
(192.168.0.97),mongo_CFG2:27017 (192.168.0.98),mongo_CFG3:27017 (192.168.0.99)] 2014-12-29T22:08:43.793+0000 D SHARDING [Balancer] Refreshing MaxChunkSize: 64MB 2014-12-29T22:08:43.793+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:08:43.793+0000 D NETWORK [Balancer] ReplicaSetMonitor::get xxx 2014-12-29T22:08:43.793+0000 D NETWORK [Balancer] dbclient_rs findOne to primary node in xxx 2014-12-29T22:08:43.793+0000 D NETWORK [Balancer] ReplicaSetMonitor::get xxx 2014-12-29T22:08:43.794+0000 D SHARDING [Balancer] trying to acquire new distributed lock for balancer on mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 ( lock timeout : 900000, ping interval : 30000, process : mongo_S1:27017:1419890901:1804289383 ) 2014-12-29T22:08:43.794+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:43.794+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:43.794+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:43.794+0000 D SHARDING [Balancer] about to acquire distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' 2014-12-29T22:08:43.794+0000 D SHARDING [Balancer] trying to acquire lock { _id: "balancer", state: 0, ts: ObjectId('54a1d0e128d48a01b1441ee6') } with details { state: 1, who: "mongo_S1:27017:1419890901:1804289383:Balancer:1681692777", process: "mongo_S1:27017:1419890901:1804289383", when: new Date(1419890923794), why: "doing balance round", ts: ObjectId('54a1d0eb28d48a01b1441eec') } 2014-12-29T22:08:44.020+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' acquired, ts : 54a1d0eb28d48a01b1441eec 2014-12-29T22:08:44.020+0000 D SHARDING [Balancer] *** start balancing round. waitForDelete: 0, secondaryThrottle: default 2014-12-29T22:08:44.020+0000 D SHARDING [Balancer] can't balance without more active shards 2014-12-29T22:08:44.020+0000 D SHARDING [Balancer] no need to move any chunk 2014-12-29T22:08:44.020+0000 D SHARDING [Balancer] about to log balancer result: { server: "mongo_S1", time: new Date(1419890924020), what: "balancer.round", details: { executionTimeMillis: 228, errorOccured: false, candidateChunks: 0, chunksMoved: 0 } } 2014-12-29T22:08:44.036+0000 D SHARDING [Balancer] *** end of balancing round 2014-12-29T22:08:44.161+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' unlocked. 2014-12-29T22:08:44.875+0000 D SHARDING [conn2] Request::process begin ns: test.test msg id: 1512662069 op: 2004 attempt: 0 2014-12-29T22:08:44.875+0000 D SHARDING [conn2] query: test.test { $query: {}, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 4 2014-12-29T22:08:44.875+0000 D NETWORK [conn2] creating pcursor over QSpec { ns: "test.test", n2skip: 0, n2return: -1, options: 4, query: { $query: {}, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} } 2014-12-29T22:08:44.875+0000 D QUERY [conn2] [QLOG] Beginning planning... ============================= Options = NO_TABLE_SCAN Canonical query: ns=test.test limit=0 skip=0 Tree: $and Sort: {} Proj: {} ============================= 2014-12-29T22:08:44.875+0000 D QUERY [conn2] [QLOG] Index 0 is kp: { _id: 1 } 2014-12-29T22:08:44.875+0000 D QUERY [conn2] [QLOG] Rated tree: $and 2014-12-29T22:08:44.875+0000 D QUERY [conn2] [QLOG] Planner: outputted 0 indexed solutions. 
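Each balancer round above ends the same way: "found 1 shards listed on config server(s)" and "can't balance without more active shards", so no chunks are moved. A quick way to confirm that view from the same mongos, as a sketch:

// shards currently registered in the cluster metadata
db.getSiblingDB("config").shards.find()
// summary of sharding state, including the balancer
sh.status()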
2014-12-29T22:08:44.875+0000 D NETWORK [conn2] initializing over 1 shards required by [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:08:44.875+0000 D NETWORK [conn2] initializing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] dbclient_rs say using secondary or tagged node selection in xxx, read pref is { pref: "primary pref", tags: [ {} ] } (primary : mongo_D1:27017, lastTagged : [not cached]) 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] dbclient_rs selecting primary node mongo_D1:27017 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] initialized query (lazily) on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] finishing over 1 shards 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] finishing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:08:44.876+0000 D NETWORK [conn2] finished on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "(done)", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 2014-12-29T22:08:44.876+0000 D SHARDING [conn2] Request::process end ns: test.test msg id: 1512662069 op: 2004 attempt: 0 0ms 2014-12-29T22:08:50.882+0000 D SHARDING [conn2] Request::process begin ns: test.test msg id: 604305918 op: 2004 attempt: 0 2014-12-29T22:08:50.882+0000 D SHARDING [conn2] query: test.test { $query: {}, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 4 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] creating pcursor over QSpec { ns: "test.test", n2skip: 0, n2return: -1, options: 4, query: { $query: {}, $readPreference: { mode: "primaryPreferred" } }, 
fields: {} } and CInfo { v_ns: "", filter: {} } 2014-12-29T22:08:50.882+0000 D QUERY [conn2] [QLOG] Beginning planning... ============================= Options = NO_TABLE_SCAN Canonical query: ns=test.test limit=0 skip=0 Tree: $and Sort: {} Proj: {} ============================= 2014-12-29T22:08:50.882+0000 D QUERY [conn2] [QLOG] Index 0 is kp: { _id: 1 } 2014-12-29T22:08:50.882+0000 D QUERY [conn2] [QLOG] Rated tree: $and 2014-12-29T22:08:50.882+0000 D QUERY [conn2] [QLOG] Planner: outputted 0 indexed solutions. 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] initializing over 1 shards required by [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] initializing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] dbclient_rs say using secondary or tagged node selection in xxx, read pref is { pref: "primary pref", tags: [ {} ] } (primary : mongo_D1:27017, lastTagged : [not cached]) 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:50.882+0000 D NETWORK [conn2] dbclient_rs selecting primary node mongo_D1:27017 2014-12-29T22:08:50.883+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:50.883+0000 D NETWORK [conn2] initialized query (lazily) on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:08:50.883+0000 D NETWORK [conn2] finishing over 1 shards 2014-12-29T22:08:50.883+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:50.883+0000 D NETWORK [conn2] finishing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:08:50.883+0000 D NETWORK [conn2] finished on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "(done)", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 2014-12-29T22:08:50.883+0000 D SHARDING [conn2] Request::process end ns: test.test msg 
id: 604305918 op: 2004 attempt: 0 0ms 2014-12-29T22:08:51.942+0000 D SHARDING [LockPinger] distributed lock pinger 'mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017/mongo_S1:27017:1419890901:1804289383' about to ping. 2014-12-29T22:08:51.942+0000 D NETWORK [LockPinger] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:51.942+0000 D NETWORK [LockPinger] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:51.942+0000 D NETWORK [LockPinger] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:52.114+0000 I SHARDING [LockPinger] cluster mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 pinged successfully at Mon Dec 29 22:08:51 2014 by distributed lock pinger 'mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017/mongo_S1:27017:1419890901:1804289383', sleeping for 30000ms 2014-12-29T22:08:52.637+0000 D - [UserCacheInvalidator] User Assertion: 13053:help failed: { ok: 0.0, errmsg: "no such cmd: _getUserCacheGeneration", code: 59, bad cmd: { _getUserCacheGeneration: "1", help: 1 } } 2014-12-29T22:08:52.637+0000 I NETWORK [UserCacheInvalidator] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool 2014-12-29T22:08:52.637+0000 W ACCESS [UserCacheInvalidator] An error occurred while fetching current user cache generation to check if user cache needs invalidation: Location13053 help failed: { ok: 0.0, errmsg: "no such cmd: _getUserCacheGeneration", code: 59, bad cmd: { _getUserCacheGeneration: "1", help: 1 } } 2014-12-29T22:08:52.637+0000 I ACCESS [UserCacheInvalidator] User cache generation changed from 54a1d0d628d48a01b1441ee3 to 000000000000000000000000; invalidating user cache 2014-12-29T22:08:53.460+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx 2014-12-29T22:08:53.460+0000 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set xxx 2014-12-29T22:08:53.460+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:08:53.460+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D1:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: true, secondary: false, hosts: [ "mongo_D1:27017", "mongo_D2:27017" ], primary: "mongo_D1:27017", me: "mongo_D1:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890933460), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:08:53.460+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.96:27017, no events 2014-12-29T22:08:53.461+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890933461), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:08:54.162+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:54.162+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:54.162+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:54.162+0000 D NETWORK [Balancer] polling for status of connection to 
192.168.0.97:27017, no events 2014-12-29T22:08:54.162+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:54.162+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:54.162+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:08:54.163+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:08:54.163+0000 D NETWORK [Balancer] connected to server mongo_CFG1:27017 (192.168.0.97) 2014-12-29T22:08:54.163+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG2:27017] 2014-12-29T22:08:54.163+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:08:54.163+0000 D NETWORK [Balancer] connected to server mongo_CFG2:27017 (192.168.0.98) 2014-12-29T22:08:54.163+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG3:27017] 2014-12-29T22:08:54.164+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:08:54.164+0000 D NETWORK [Balancer] connected to server mongo_CFG3:27017 (192.168.0.99) 2014-12-29T22:08:54.164+0000 D SHARDING [Balancer] found 1 shards listed on config server(s): SyncClusterConnection [mongo_CFG1:27017 (192.168.0.97),mongo_CFG2:27017 (192.168.0.98),mongo_CFG3:27017 (192.168.0.99)] 2014-12-29T22:08:54.164+0000 D SHARDING [Balancer] Refreshing MaxChunkSize: 64MB 2014-12-29T22:08:54.164+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:08:54.164+0000 D NETWORK [Balancer] ReplicaSetMonitor::get xxx 2014-12-29T22:08:54.165+0000 D NETWORK [Balancer] dbclient_rs findOne to primary node in xxx 2014-12-29T22:08:54.165+0000 D NETWORK [Balancer] ReplicaSetMonitor::get xxx 2014-12-29T22:08:54.165+0000 D SHARDING [Balancer] trying to acquire new distributed lock for balancer on mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 ( lock timeout : 900000, ping interval : 30000, process : mongo_S1:27017:1419890901:1804289383 ) 2014-12-29T22:08:54.165+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:08:54.165+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:08:54.165+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:08:54.165+0000 D SHARDING [Balancer] about to acquire distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' 2014-12-29T22:08:54.165+0000 D SHARDING [Balancer] trying to acquire lock { _id: "balancer", state: 0, ts: ObjectId('54a1d0eb28d48a01b1441eec') } with details { state: 1, who: "mongo_S1:27017:1419890901:1804289383:Balancer:1681692777", process: "mongo_S1:27017:1419890901:1804289383", when: new Date(1419890934165), why: "doing balance round", ts: ObjectId('54a1d0f628d48a01b1441ef0') } 2014-12-29T22:08:54.339+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' acquired, ts : 54a1d0f628d48a01b1441ef0 2014-12-29T22:08:54.339+0000 D SHARDING [Balancer] *** start balancing round. 
waitForDelete: 0, secondaryThrottle: default 2014-12-29T22:08:54.339+0000 D SHARDING [Balancer] can't balance without more active shards 2014-12-29T22:08:54.339+0000 D SHARDING [Balancer] no need to move any chunk 2014-12-29T22:08:54.339+0000 D SHARDING [Balancer] about to log balancer result: { server: "mongo_S1", time: new Date(1419890934339), what: "balancer.round", details: { executionTimeMillis: 177, errorOccured: false, candidateChunks: 0, chunksMoved: 0 } } 2014-12-29T22:08:54.348+0000 D SHARDING [Balancer] *** end of balancing round 2014-12-29T22:08:54.411+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' unlocked. 2014-12-29T22:08:56.886+0000 D SHARDING [conn2] Request::process begin ns: test.test msg id: -1248763938 op: 2004 attempt: 0 2014-12-29T22:08:56.886+0000 D SHARDING [conn2] query: test.test { $query: {}, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 4 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] creating pcursor over QSpec { ns: "test.test", n2skip: 0, n2return: -1, options: 4, query: { $query: {}, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} } 2014-12-29T22:08:56.886+0000 D QUERY [conn2] [QLOG] Beginning planning... ============================= Options = NO_TABLE_SCAN Canonical query: ns=test.test limit=0 skip=0 Tree: $and Sort: {} Proj: {} ============================= 2014-12-29T22:08:56.886+0000 D QUERY [conn2] [QLOG] Index 0 is kp: { _id: 1 } 2014-12-29T22:08:56.886+0000 D QUERY [conn2] [QLOG] Rated tree: $and 2014-12-29T22:08:56.886+0000 D QUERY [conn2] [QLOG] Planner: outputted 0 indexed solutions. 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] initializing over 1 shards required by [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] initializing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] dbclient_rs say using secondary or tagged node selection in xxx, read pref is { pref: "primary pref", tags: [ {} ] } (primary : mongo_D1:27017, lastTagged : [not cached]) 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] dbclient_rs selecting primary node mongo_D1:27017 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] initialized query (lazily) on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] finishing over 1 shards 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] finishing on shard 
xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:08:56.886+0000 D NETWORK [conn2] finished on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "(done)", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 2014-12-29T22:08:56.886+0000 D SHARDING [conn2] Request::process end ns: test.test msg id: -1248763938 op: 2004 attempt: 0 0ms 2014-12-29T22:09:02.892+0000 D SHARDING [conn2] Request::process begin ns: test.test msg id: -1485401471 op: 2004 attempt: 0 2014-12-29T22:09:02.892+0000 D SHARDING [conn2] query: test.test { $query: {}, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 4 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] creating pcursor over QSpec { ns: "test.test", n2skip: 0, n2return: -1, options: 4, query: { $query: {}, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} } 2014-12-29T22:09:02.893+0000 D QUERY [conn2] [QLOG] Beginning planning... ============================= Options = NO_TABLE_SCAN Canonical query: ns=test.test limit=0 skip=0 Tree: $and Sort: {} Proj: {} ============================= 2014-12-29T22:09:02.893+0000 D QUERY [conn2] [QLOG] Index 0 is kp: { _id: 1 } 2014-12-29T22:09:02.893+0000 D QUERY [conn2] [QLOG] Rated tree: $and 2014-12-29T22:09:02.893+0000 D QUERY [conn2] [QLOG] Planner: outputted 0 indexed solutions. 
2014-12-29T22:09:02.893+0000 D NETWORK [conn2] initializing over 1 shards required by [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] initializing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] dbclient_rs say using secondary or tagged node selection in xxx, read pref is { pref: "primary pref", tags: [ {} ] } (primary : mongo_D1:27017, lastTagged : [not cached]) 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] dbclient_rs selecting primary node mongo_D1:27017 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] initialized query (lazily) on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] finishing over 1 shards 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] finishing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:09:02.893+0000 D NETWORK [conn2] finished on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "(done)", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 2014-12-29T22:09:02.893+0000 D SHARDING [conn2] Request::process end ns: test.test msg id: -1485401471 op: 2004 attempt: 0 0ms 2014-12-29T22:09:03.461+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx 2014-12-29T22:09:03.461+0000 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set xxx 2014-12-29T22:09:03.461+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:03.461+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D1:27017 based on ismaster reply: { setName: "xxx", setVersion: 
1, ismaster: true, secondary: false, hosts: [ "mongo_D1:27017", "mongo_D2:27017" ], primary: "mongo_D1:27017", me: "mongo_D1:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890943461), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:03.461+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.96:27017, no events 2014-12-29T22:09:03.461+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890943461), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:04.412+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:04.412+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:04.412+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:04.412+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:04.412+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:04.412+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:04.413+0000 D SHARDING [Balancer] found 1 shards listed on config server(s): SyncClusterConnection [mongo_CFG1:27017 (192.168.0.97),mongo_CFG2:27017 (192.168.0.98),mongo_CFG3:27017 (192.168.0.99)] 2014-12-29T22:09:04.413+0000 D SHARDING [Balancer] Refreshing MaxChunkSize: 64MB 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] ReplicaSetMonitor::get xxx 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] dbclient_rs findOne to primary node in xxx 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] ReplicaSetMonitor::get xxx 2014-12-29T22:09:04.413+0000 D SHARDING [Balancer] trying to acquire new distributed lock for balancer on mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 ( lock timeout : 900000, ping interval : 30000, process : mongo_S1:27017:1419890901:1804289383 ) 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:04.413+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:04.414+0000 D SHARDING [Balancer] about to acquire distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' 2014-12-29T22:09:04.414+0000 D SHARDING [Balancer] trying to acquire lock { _id: "balancer", state: 0, ts: ObjectId('54a1d0f628d48a01b1441ef0') } with details { state: 1, who: 
"mongo_S1:27017:1419890901:1804289383:Balancer:1681692777", process: "mongo_S1:27017:1419890901:1804289383", when: new Date(1419890944414), why: "doing balance round", ts: ObjectId('54a1d10028d48a01b1441ef2') } 2014-12-29T22:09:04.552+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' acquired, ts : 54a1d10028d48a01b1441ef2 2014-12-29T22:09:04.552+0000 D SHARDING [Balancer] *** start balancing round. waitForDelete: 0, secondaryThrottle: default 2014-12-29T22:09:04.552+0000 D SHARDING [Balancer] can't balance without more active shards 2014-12-29T22:09:04.552+0000 D SHARDING [Balancer] no need to move any chunk 2014-12-29T22:09:04.552+0000 D SHARDING [Balancer] about to log balancer result: { server: "mongo_S1", time: new Date(1419890944552), what: "balancer.round", details: { executionTimeMillis: 140, errorOccured: false, candidateChunks: 0, chunksMoved: 0 } } 2014-12-29T22:09:04.563+0000 D SHARDING [Balancer] *** end of balancing round 2014-12-29T22:09:04.693+0000 I SHARDING [Balancer] distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383' unlocked. 2014-12-29T22:09:13.461+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx 2014-12-29T22:09:13.462+0000 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set xxx 2014-12-29T22:09:13.462+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:13.462+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D1:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: true, secondary: false, hosts: [ "mongo_D1:27017", "mongo_D2:27017" ], primary: "mongo_D1:27017", me: "mongo_D1:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890953462), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:13.462+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.96:27017, no events 2014-12-29T22:09:13.462+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890953462), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:14.693+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:14.693+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:14.693+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:14.694+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:14.694+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:14.694+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:21.230+0000 D NETWORK polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:22.114+0000 D SHARDING [LockPinger] distributed lock pinger 'mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017/mongo_S1:27017:1419890901:1804289383' about to ping. 
2014-12-29T22:09:22.114+0000 D NETWORK [LockPinger] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:22.114+0000 D NETWORK [LockPinger] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:22.114+0000 D NETWORK [LockPinger] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:22.636+0000 I NETWORK [UserCacheInvalidator] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:09:22.636+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.96:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:22.637+0000 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:22.637+0000 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:22.637+0000 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2014-12-29T22:09:22.637+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:23.462+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx 2014-12-29T22:09:23.462+0000 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set xxx 2014-12-29T22:09:23.463+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D1:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: true, secondary: false, hosts: [ "mongo_D1:27017", "mongo_D2:27017" ], primary: "mongo_D1:27017", me: "mongo_D1:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890963463), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:23.463+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: 
"xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890963463), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:27.637+0000 W NETWORK [UserCacheInvalidator] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 2014-12-29T22:09:27.637+0000 I NETWORK [UserCacheInvalidator] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:27.637+0000 I NETWORK [UserCacheInvalidator] SyncClusterConnection connecting to [mongo_CFG2:27017] 2014-12-29T22:09:27.638+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:27.638+0000 D NETWORK [UserCacheInvalidator] connected to server mongo_CFG2:27017 (192.168.0.98) 2014-12-29T22:09:27.638+0000 I NETWORK [UserCacheInvalidator] SyncClusterConnection connecting to [mongo_CFG3:27017] 2014-12-29T22:09:27.638+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:27.639+0000 D NETWORK [UserCacheInvalidator] connected to server mongo_CFG3:27017 (192.168.0.99) 2014-12-29T22:09:27.639+0000 I NETWORK [UserCacheInvalidator] unable to set SO_RCVTIMEO 2014-12-29T22:09:27.640+0000 I NETWORK [UserCacheInvalidator] trying reconnect to mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:27.640+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:27.776+0000 I NETWORK [Balancer] Socket recv() errno:110 Connection timed out 192.168.0.97:27017 2014-12-29T22:09:27.776+0000 I NETWORK [Balancer] SocketException: remote: 192.168.0.97:27017 error: 9001 socket exception [RECV_ERROR] server [192.168.0.97:27017] 2014-12-29T22:09:27.776+0000 D - [Balancer] User Assertion: 17255:error receiving write command response, possible socket exception - see logs 2014-12-29T22:09:27.776+0000 I NETWORK [Balancer] Detected bad connection created at 1419890913644542 microSec, clearing pool for mongo_CFG1:27017 of 1 connections 2014-12-29T22:09:27.776+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:09:27.776+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:32.640+0000 W NETWORK [UserCacheInvalidator] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 
2014-12-29T22:09:32.640+0000 I NETWORK [UserCacheInvalidator] reconnect mongo_CFG1:27017 (192.168.0.97) failed failed couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:32.640+0000 I NETWORK [UserCacheInvalidator] query on admin.$cmd: { _getUserCacheGeneration: "1", help: 1 } failed to: mongo_CFG1:27017 (192.168.0.97) failed exception: socket exception [CONNECT_ERROR] for mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:32.641+0000 D - [UserCacheInvalidator] User Assertion: 13053:help failed: { ok: 0.0, errmsg: "no such cmd: _getUserCacheGeneration", code: 59, bad cmd: { _getUserCacheGeneration: "1", help: 1 } } 2014-12-29T22:09:32.641+0000 I NETWORK [UserCacheInvalidator] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool 2014-12-29T22:09:32.641+0000 W ACCESS [UserCacheInvalidator] An error occurred while fetching current user cache generation to check if user cache needs invalidation: Location13053 help failed: { ok: 0.0, errmsg: "no such cmd: _getUserCacheGeneration", code: 59, bad cmd: { _getUserCacheGeneration: "1", help: 1 } } 2014-12-29T22:09:32.776+0000 W NETWORK [Balancer] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 2014-12-29T22:09:32.776+0000 I NETWORK [Balancer] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:32.776+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG2:27017] 2014-12-29T22:09:32.777+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:32.777+0000 D NETWORK [Balancer] connected to server mongo_CFG2:27017 (192.168.0.98) 2014-12-29T22:09:32.777+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG3:27017] 2014-12-29T22:09:32.777+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:32.778+0000 D NETWORK [Balancer] connected to server mongo_CFG3:27017 (192.168.0.99) 2014-12-29T22:09:32.778+0000 I NETWORK [Balancer] unable to set SO_RCVTIMEO 2014-12-29T22:09:32.779+0000 I NETWORK [Balancer] trying reconnect to mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:32.779+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:33.463+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx 2014-12-29T22:09:33.463+0000 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set xxx 2014-12-29T22:09:33.463+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:33.464+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D1:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: true, secondary: false, hosts: [ "mongo_D1:27017", "mongo_D2:27017" ], primary: "mongo_D1:27017", me: "mongo_D1:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890973463), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:33.464+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.96:27017, no events 2014-12-29T22:09:33.464+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890973464), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:34.320+0000 I NETWORK Socket recv() errno:110 Connection timed out 192.168.0.97:27017 2014-12-29T22:09:34.320+0000 I NETWORK SocketException: remote: 192.168.0.97:27017 error: 9001 socket exception [RECV_ERROR] server [192.168.0.97:27017] 2014-12-29T22:09:34.320+0000 I NETWORK DBClientCursor::init call() failed 2014-12-29T22:09:34.320+0000 D - User Assertion: 10276:DBClientBase::findN: transport error: mongo_CFG1:27017 ns: config.$cmd query: { dbhash: 1, collections: [ "chunks", "databases", "collections", "shards", "version" ] } 2014-12-29T22:09:34.320+0000 W SHARDING couldn't check dbhash on config server mongo_CFG1:27017 :: caused by :: 10276 DBClientBase::findN: transport error: mongo_CFG1:27017 ns: config.$cmd query: { dbhash: 1, collections: [ "chunks", "databases", "collections", "shards", "version" ] } 2014-12-29T22:09:34.320+0000 D NETWORK polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:34.323+0000 D NETWORK polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:35.200+0000 I NETWORK [LockPinger] Socket recv() errno:110 Connection timed out 192.168.0.97:27017 2014-12-29T22:09:35.200+0000 I NETWORK [LockPinger] SocketException: remote: 192.168.0.97:27017 error: 9001 socket exception [RECV_ERROR] server [192.168.0.97:27017] 2014-12-29T22:09:35.200+0000 I NETWORK [LockPinger] DBClientCursor::init call() failed 2014-12-29T22:09:35.200+0000 D - [LockPinger] User Assertion: 10276:DBClientBase::findN: transport error: mongo_CFG1:27017 ns: admin.$cmd query: { resetError: 1 } 2014-12-29T22:09:35.232+0000 I NETWORK [LockPinger] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool 2014-12-29T22:09:35.232+0000 W SHARDING [LockPinger] distributed lock pinger 'mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017/mongo_S1:27017:1419890901:1804289383' detected an exception while pinging. :: caused by :: SyncClusterConnection::update prepare failed: mongo_CFG1:27017 (192.168.0.97) failed:10276 DBClientBase::findN: transport error: mongo_CFG1:27017 ns: admin.$cmd query: { resetError: 1 } 2014-12-29T22:09:37.779+0000 W NETWORK [Balancer] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 
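As an aside to the dbhash failure above: the consistency check mongos runs against each config server ({ dbhash: 1, collections: [ "chunks", "databases", "collections", "shards", "version" ] }) can be re-issued by hand to see which config servers still agree. The following is a minimal sketch only; the hostnames come from the log, while the URIs, timeouts, and use of a driver release compatible with this test cluster are assumptions.

from pymongo import MongoClient
from pymongo.errors import PyMongoError

CONFIG_SERVERS = ["mongo_CFG1:27017", "mongo_CFG2:27017", "mongo_CFG3:27017"]
COLLECTIONS = ["chunks", "databases", "collections", "shards", "version"]

hashes = {}
for host in CONFIG_SERVERS:
    try:
        # Short server selection timeout so the unreachable host
        # (mongo_CFG1 in this log) fails fast instead of hanging.
        client = MongoClient(host, serverSelectionTimeoutMS=5000)
        reply = client.config.command({"dbhash": 1, "collections": COLLECTIONS})
        hashes[host] = reply.get("md5")
    except PyMongoError as exc:
        hashes[host] = "unreachable: %s" % exc

# All reachable config servers should report the same md5.
print(hashes)
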
2014-12-29T22:09:37.779+0000 I NETWORK [Balancer] reconnect mongo_CFG1:27017 (192.168.0.97) failed failed couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:37.779+0000 I NETWORK [Balancer] query on config.shards: {} failed to: mongo_CFG1:27017 (192.168.0.97) failed exception: socket exception [CONNECT_ERROR] for mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:37.780+0000 D SHARDING [Balancer] found 1 shards listed on config server(s): SyncClusterConnection [mongo_CFG1:27017 (192.168.0.97) failed,mongo_CFG2:27017 (192.168.0.98),mongo_CFG3:27017 (192.168.0.99)] 2014-12-29T22:09:37.780+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:09:37.780+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:38.989+0000 I NETWORK [mongosMain] connection accepted from 192.168.0.1:53628 #3 (2 connections now open) 2014-12-29T22:09:38.990+0000 I NETWORK [conn3] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:09:38.990+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:42.781+0000 W NETWORK [Balancer] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 2014-12-29T22:09:42.781+0000 I NETWORK [Balancer] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:42.781+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG2:27017] 2014-12-29T22:09:42.781+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:42.781+0000 D NETWORK [Balancer] connected to server mongo_CFG2:27017 (192.168.0.98) 2014-12-29T22:09:42.781+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG3:27017] 2014-12-29T22:09:42.782+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:42.782+0000 D NETWORK [Balancer] connected to server mongo_CFG3:27017 (192.168.0.99) 2014-12-29T22:09:42.782+0000 I NETWORK [Balancer] unable to set SO_RCVTIMEO 2014-12-29T22:09:42.783+0000 I NETWORK [Balancer] trying reconnect to mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:42.783+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:43.464+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx 2014-12-29T22:09:43.464+0000 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set xxx 2014-12-29T22:09:43.464+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:43.464+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D1:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: true, secondary: false, hosts: [ "mongo_D1:27017", "mongo_D2:27017" ], primary: "mongo_D1:27017", me: "mongo_D1:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890983464), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:43.464+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.96:27017, no events 2014-12-29T22:09:43.465+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 
48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890983465), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:43.990+0000 W NETWORK [conn3] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 2014-12-29T22:09:43.990+0000 I NETWORK [conn3] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:43.990+0000 I NETWORK [conn3] SyncClusterConnection connecting to [mongo_CFG2:27017] 2014-12-29T22:09:43.991+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:43.991+0000 D NETWORK [conn3] connected to server mongo_CFG2:27017 (192.168.0.98) 2014-12-29T22:09:43.991+0000 I NETWORK [conn3] SyncClusterConnection connecting to [mongo_CFG3:27017] 2014-12-29T22:09:43.991+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:43.991+0000 D NETWORK [conn3] connected to server mongo_CFG3:27017 (192.168.0.99) 2014-12-29T22:09:43.991+0000 I NETWORK [conn3] unable to set SO_RCVTIMEO 2014-12-29T22:09:43.992+0000 I NETWORK [conn3] trying reconnect to mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:43.993+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:47.784+0000 W NETWORK [Balancer] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 2014-12-29T22:09:47.784+0000 I NETWORK [Balancer] reconnect mongo_CFG1:27017 (192.168.0.97) failed failed couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:47.784+0000 I NETWORK [Balancer] query on config.settings: { _id: "chunksize" } failed to: mongo_CFG1:27017 (192.168.0.97) failed exception: socket exception [CONNECT_ERROR] for mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:47.784+0000 D SHARDING [Balancer] Refreshing MaxChunkSize: 64MB 2014-12-29T22:09:47.784+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:09:47.785+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:48.993+0000 W NETWORK [conn3] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 2014-12-29T22:09:48.993+0000 I NETWORK [conn3] reconnect mongo_CFG1:27017 (192.168.0.97) failed failed couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:48.993+0000 I NETWORK [conn3] query on config.databases: { _id: "admin" } failed to: mongo_CFG1:27017 (192.168.0.97) failed exception: socket exception [CONNECT_ERROR] for mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:48.993+0000 D SHARDING [conn3] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" } 2014-12-29T22:09:48.995+0000 I NETWORK [conn3] trying reconnect to mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:48.995+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:52.636+0000 I NETWORK [UserCacheInvalidator] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:09:52.637+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:52.785+0000 W NETWORK [Balancer] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 
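The "Refreshing MaxChunkSize: 64MB" line above is the balancer falling back to its default after the config.settings query fails. A small sketch of reading that same settings document through the mongos (the mongos URI is an assumption; the document shape matches the query in the log):

from pymongo import MongoClient

mongos = MongoClient("mongodb://mongo_S1:27017/")
doc = mongos.config.settings.find_one({"_id": "chunksize"})
# When no document exists, or the read fails as in the log,
# the balancer uses the 64MB default.
print(doc or {"_id": "chunksize", "value": 64})
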
2014-12-29T22:09:52.785+0000 I NETWORK [Balancer] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:52.785+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG2:27017] 2014-12-29T22:09:52.785+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:52.786+0000 D NETWORK [Balancer] connected to server mongo_CFG2:27017 (192.168.0.98) 2014-12-29T22:09:52.786+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG3:27017] 2014-12-29T22:09:52.786+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:52.786+0000 D NETWORK [Balancer] connected to server mongo_CFG3:27017 (192.168.0.99) 2014-12-29T22:09:52.786+0000 I NETWORK [Balancer] unable to set SO_RCVTIMEO 2014-12-29T22:09:52.787+0000 I NETWORK [Balancer] trying reconnect to mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:52.788+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:53.465+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx 2014-12-29T22:09:53.465+0000 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set xxx 2014-12-29T22:09:53.465+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:53.465+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D1:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: true, secondary: false, hosts: [ "mongo_D1:27017", "mongo_D2:27017" ], primary: "mongo_D1:27017", me: "mongo_D1:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890993465), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:53.465+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.96:27017, no events 2014-12-29T22:09:53.465+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419890993465), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:09:53.995+0000 W NETWORK [conn3] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 
2014-12-29T22:09:53.995+0000 I NETWORK [conn3] reconnect mongo_CFG1:27017 (192.168.0.97) failed failed couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:53.995+0000 I NETWORK [conn3] query on config.collections: { _id: /^admin\./ } failed to: mongo_CFG1:27017 (192.168.0.97) failed exception: socket exception [CONNECT_ERROR] for mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:53.996+0000 D SHARDING [conn3] found 0 dropped collections and 0 sharded collections for database admin 2014-12-29T22:09:53.996+0000 D SHARDING [conn3] Request::process begin ns: admin.$cmd msg id: -187085978 op: 2004 attempt: 0 2014-12-29T22:09:53.996+0000 D SHARDING [conn3] command: admin.$cmd { ismaster: 1 } ntoreturn: -1 options: 0 2014-12-29T22:09:53.996+0000 D SHARDING [conn3] Request::process end ns: admin.$cmd msg id: -187085978 op: 2004 attempt: 0 0ms 2014-12-29T22:09:53.997+0000 D SHARDING [conn2] Request::process begin ns: test.test msg id: -955908119 op: 2004 attempt: 0 2014-12-29T22:09:53.997+0000 D SHARDING [conn2] query: test.test { $query: {}, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 4 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] creating pcursor over QSpec { ns: "test.test", n2skip: 0, n2return: -1, options: 4, query: { $query: {}, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} } 2014-12-29T22:09:53.997+0000 D QUERY [conn2] [QLOG] Beginning planning... ============================= Options = NO_TABLE_SCAN Canonical query: ns=test.test limit=0 skip=0 Tree: $and Sort: {} Proj: {} ============================= 2014-12-29T22:09:53.997+0000 D QUERY [conn2] [QLOG] Index 0 is kp: { _id: 1 } 2014-12-29T22:09:53.997+0000 D QUERY [conn2] [QLOG] Rated tree: $and 2014-12-29T22:09:53.997+0000 D QUERY [conn2] [QLOG] Planner: outputted 0 indexed solutions. 
2014-12-29T22:09:53.997+0000 D NETWORK [conn2] initializing over 1 shards required by [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] initializing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] dbclient_rs say using secondary or tagged node selection in xxx, read pref is { pref: "primary pref", tags: [ {} ] } (primary : mongo_D1:27017, lastTagged : [not cached]) 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] dbclient_rs selecting primary node mongo_D1:27017 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] initialized query (lazily) on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] finishing over 1 shards 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] finishing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:09:53.997+0000 D NETWORK [conn2] finished on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "(done)", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 2014-12-29T22:09:53.997+0000 D SHARDING [conn2] Request::process end ns: test.test msg id: -955908119 op: 2004 attempt: 0 0ms 2014-12-29T22:09:53.998+0000 D SHARDING [conn3] Request::process begin ns: test.test msg id: 1483039653 op: 2004 attempt: 0 2014-12-29T22:09:53.998+0000 D SHARDING [conn3] query: test.test { $query: {}, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 4 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] creating pcursor over QSpec { ns: "test.test", n2skip: 0, n2return: -1, options: 4, query: { $query: {}, $readPreference: { mode: "primaryPreferred" } }, 
fields: {} } and CInfo { v_ns: "", filter: {} } 2014-12-29T22:09:53.998+0000 D QUERY [conn3] [QLOG] Beginning planning... ============================= Options = NO_TABLE_SCAN Canonical query: ns=test.test limit=0 skip=0 Tree: $and Sort: {} Proj: {} ============================= 2014-12-29T22:09:53.998+0000 D QUERY [conn3] [QLOG] Index 0 is kp: { _id: 1 } 2014-12-29T22:09:53.998+0000 D QUERY [conn3] [QLOG] Rated tree: $and 2014-12-29T22:09:53.998+0000 D QUERY [conn3] [QLOG] Planner: outputted 0 indexed solutions. 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] initializing over 1 shards required by [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] initializing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] dbclient_rs say using secondary or tagged node selection in xxx, read pref is { pref: "primary pref", tags: [ {} ] } (primary : mongo_D1:27017, lastTagged : [not cached]) 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.998+0000 D NETWORK [conn3] dbclient_rs selecting primary node mongo_D1:27017 2014-12-29T22:09:53.999+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.999+0000 D NETWORK [conn3] initialized query (lazily) on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:09:53.999+0000 D NETWORK [conn3] finishing over 1 shards 2014-12-29T22:09:53.999+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:09:53.999+0000 D NETWORK [conn3] finishing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:09:53.999+0000 D NETWORK [conn3] finished on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "(done)", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 2014-12-29T22:09:54.000+0000 D SHARDING [conn2] Request::process begin ns: test.$cmd msg id: -323597380 op: 2004 attempt: 0 2014-12-29T22:09:54.000+0000 D SHARDING [conn3] Request::process end ns: 
test.test msg id: 1483039653 op: 2004 attempt: 0 2ms 2014-12-29T22:09:54.000+0000 D SHARDING [conn2] command: test.$cmd { insert: "test", ordered: true, documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ] } ntoreturn: -1 options: 0 2014-12-29T22:09:54.000+0000 D SHARDING [conn2] starting execution of write batch of size 1 for test.test 2014-12-29T22:09:54.000+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:09:54.000+0000 D SHARDING [conn2] sending write batch to mongo_D1:27017: { insert: "test", documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ], ordered: true, metadata: { shardName: "xxx", shardVersion: [ Timestamp 1000|0, ObjectId('54a1d0e128d48a01b1441eea') ], session: 0 } } 2014-12-29T22:09:54.000+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:54.001+0000 D SHARDING [conn2] write results received from mongo_D1:27017: { ok: 1, n: 0, lastOp: Timestamp 1419890918000|1, electionId: ObjectId('54a1d0e0b12b60a304edb60b'), writeErrors: [ { index: 0, code: 11000, errmsg: "insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.test.$_id_ dup key: { : ObjectId('54a1d0e66357ef0f03000e5a') }" } ] } 2014-12-29T22:09:54.002+0000 D SHARDING [conn2] finished execution of write batch with write errors for test.test 2014-12-29T22:09:54.002+0000 I NETWORK [conn2] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:09:54.002+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:57.637+0000 W NETWORK [UserCacheInvalidator] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 
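The E11000 write error above suggests the test client is retrying an insert with the same fixed _id after the first attempt stalled on the config-server check: the document already reached the shard, so every retry through mongos reports a duplicate key. A hedged reproduction sketch (the ObjectId is taken from the log; the mongos URI, the extra field, and the retry framing are assumptions):

from bson import ObjectId
from pymongo import MongoClient
from pymongo.errors import DuplicateKeyError

mongos = MongoClient("mongodb://mongo_S1:27017/")
doc = {"_id": ObjectId("54a1d0e66357ef0f03000e5a"), "note": "retried insert with a fixed _id"}

try:
    mongos.test.test.insert_one(doc)
except DuplicateKeyError as exc:
    # Matches the writeErrors entry with code 11000 returned by mongo_D1 above.
    print("duplicate key:", exc.details.get("errmsg"))
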
2014-12-29T22:09:57.637+0000 I NETWORK [UserCacheInvalidator] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:57.637+0000 I NETWORK [UserCacheInvalidator] SyncClusterConnection connecting to [mongo_CFG2:27017] 2014-12-29T22:09:57.637+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:57.638+0000 D NETWORK [UserCacheInvalidator] connected to server mongo_CFG2:27017 (192.168.0.98) 2014-12-29T22:09:57.638+0000 I NETWORK [UserCacheInvalidator] SyncClusterConnection connecting to [mongo_CFG3:27017] 2014-12-29T22:09:57.638+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:57.638+0000 D NETWORK [UserCacheInvalidator] connected to server mongo_CFG3:27017 (192.168.0.99) 2014-12-29T22:09:57.638+0000 I NETWORK [UserCacheInvalidator] unable to set SO_RCVTIMEO 2014-12-29T22:09:57.639+0000 I NETWORK [UserCacheInvalidator] trying reconnect to mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:57.640+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:57.788+0000 W NETWORK [Balancer] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 2014-12-29T22:09:57.788+0000 I NETWORK [Balancer] reconnect mongo_CFG1:27017 (192.168.0.97) failed failed couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:57.788+0000 I NETWORK [Balancer] query on config.settings: { _id: "balancer" } failed to: mongo_CFG1:27017 (192.168.0.97) failed exception: socket exception [CONNECT_ERROR] for mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:09:57.788+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:57.788+0000 D NETWORK [Balancer] ReplicaSetMonitor::get xxx 2014-12-29T22:09:57.788+0000 D NETWORK [Balancer] dbclient_rs findOne to primary node in xxx 2014-12-29T22:09:57.788+0000 D NETWORK [Balancer] ReplicaSetMonitor::get xxx 2014-12-29T22:09:57.788+0000 D SHARDING [Balancer] trying to acquire new distributed lock for balancer on mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 ( lock timeout : 900000, ping interval : 30000, process : mongo_S1:27017:1419890901:1804289383 ) 2014-12-29T22:09:57.788+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.97:27017, no events 2014-12-29T22:09:57.788+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.98:27017, no events 2014-12-29T22:09:57.788+0000 D NETWORK [Balancer] polling for status of connection to 192.168.0.99:27017, no events 2014-12-29T22:09:59.002+0000 W NETWORK [conn2] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 
2014-12-29T22:09:59.002+0000 I NETWORK [conn2] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:09:59.002+0000 I NETWORK [conn2] SyncClusterConnection connecting to [mongo_CFG2:27017] 2014-12-29T22:09:59.003+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:59.003+0000 D NETWORK [conn2] connected to server mongo_CFG2:27017 (192.168.0.98) 2014-12-29T22:09:59.003+0000 I NETWORK [conn2] SyncClusterConnection connecting to [mongo_CFG3:27017] 2014-12-29T22:09:59.004+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:09:59.004+0000 D NETWORK [conn2] connected to server mongo_CFG3:27017 (192.168.0.99) 2014-12-29T22:09:59.004+0000 I NETWORK [conn2] unable to set SO_RCVTIMEO 2014-12-29T22:09:59.004+0000 D SHARDING [conn2] Not all config servers config:mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 are reachable 2014-12-29T22:09:59.004+0000 I NETWORK [conn2] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool 2014-12-29T22:09:59.004+0000 D SHARDING [conn2] not performing auto-split because not all config servers are up 2014-12-29T22:09:59.004+0000 D SHARDING [conn2] Request::process end ns: test.$cmd msg id: -323597380 op: 2004 attempt: 0 5003ms 2014-12-29T22:09:59.005+0000 D SHARDING [conn3] Request::process begin ns: test.$cmd msg id: -472948768 op: 2004 attempt: 0 2014-12-29T22:09:59.005+0000 D SHARDING [conn3] command: test.$cmd { insert: "test", ordered: true, documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ] } ntoreturn: -1 options: 0 2014-12-29T22:09:59.005+0000 D SHARDING [conn3] starting execution of write batch of size 1 for test.test 2014-12-29T22:09:59.005+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:09:59.005+0000 D SHARDING [conn3] sending write batch to mongo_D1:27017: { insert: "test", documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ], ordered: true, metadata: { shardName: "xxx", shardVersion: [ Timestamp 1000|0, ObjectId('54a1d0e128d48a01b1441eea') ], session: 0 } } 2014-12-29T22:09:59.005+0000 D NETWORK [conn3] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:09:59.006+0000 D SHARDING [conn3] write results received from mongo_D1:27017: { ok: 1, n: 0, lastOp: Timestamp 1419890918000|1, electionId: ObjectId('54a1d0e0b12b60a304edb60b'), writeErrors: [ { index: 0, code: 11000, errmsg: "insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.test.$_id_ dup key: { : 
ObjectId('54a1d0e66357ef0f03000e5a') }" } ] } 2014-12-29T22:09:59.006+0000 D SHARDING [conn3] finished execution of write batch with write errors for test.test 2014-12-29T22:09:59.006+0000 I NETWORK [conn3] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:09:59.006+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:10:02.640+0000 W NETWORK [UserCacheInvalidator] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 2014-12-29T22:10:02.640+0000 I NETWORK [UserCacheInvalidator] reconnect mongo_CFG1:27017 (192.168.0.97) failed failed couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:10:02.640+0000 I NETWORK [UserCacheInvalidator] query on admin.$cmd: { _getUserCacheGeneration: "1", help: 1 } failed to: mongo_CFG1:27017 (192.168.0.97) failed exception: socket exception [CONNECT_ERROR] for mongo_CFG1:27017 (192.168.0.97) failed 2014-12-29T22:10:02.640+0000 D - [UserCacheInvalidator] User Assertion: 13053:help failed: { ok: 0.0, errmsg: "no such cmd: _getUserCacheGeneration", code: 59, bad cmd: { _getUserCacheGeneration: "1", help: 1 } } 2014-12-29T22:10:02.640+0000 I NETWORK [UserCacheInvalidator] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool 2014-12-29T22:10:02.641+0000 W ACCESS [UserCacheInvalidator] An error occurred while fetching current user cache generation to check if user cache needs invalidation: Location13053 help failed: { ok: 0.0, errmsg: "no such cmd: _getUserCacheGeneration", code: 59, bad cmd: { _getUserCacheGeneration: "1", help: 1 } } 2014-12-29T22:10:03.466+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx 2014-12-29T22:10:03.466+0000 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set xxx 2014-12-29T22:10:03.466+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:10:03.466+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D1:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: true, secondary: false, hosts: [ "mongo_D1:27017", "mongo_D2:27017" ], primary: "mongo_D1:27017", me: "mongo_D1:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419891003466), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:10:03.466+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.96:27017, no events 2014-12-29T22:10:03.466+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419891003466), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 } 2014-12-29T22:10:04.006+0000 W NETWORK [conn3] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 
2014-12-29T22:10:04.007+0000 I NETWORK [conn3] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed 2014-12-29T22:10:04.007+0000 I NETWORK [conn3] SyncClusterConnection connecting to [mongo_CFG2:27017] 2014-12-29T22:10:04.007+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:10:04.007+0000 D NETWORK [conn3] connected to server mongo_CFG2:27017 (192.168.0.98) 2014-12-29T22:10:04.007+0000 I NETWORK [conn3] SyncClusterConnection connecting to [mongo_CFG3:27017] 2014-12-29T22:10:04.008+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:10:04.008+0000 D NETWORK [conn3] connected to server mongo_CFG3:27017 (192.168.0.99) 2014-12-29T22:10:04.008+0000 I NETWORK [conn3] unable to set SO_RCVTIMEO 2014-12-29T22:10:04.008+0000 D SHARDING [conn3] Not all config servers config:mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 are reachable 2014-12-29T22:10:04.008+0000 I NETWORK [conn3] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool 2014-12-29T22:10:04.008+0000 D SHARDING [conn3] not performing auto-split because not all config servers are up 2014-12-29T22:10:04.008+0000 D SHARDING [conn3] Request::process end ns: test.$cmd msg id: -472948768 op: 2004 attempt: 0 5002ms 2014-12-29T22:10:05.232+0000 D SHARDING [LockPinger] distributed lock pinger 'mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017/mongo_S1:27017:1419890901:1804289383' about to ping. 2014-12-29T22:10:05.233+0000 I NETWORK [LockPinger] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:10:05.233+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:10:10.013+0000 D SHARDING [conn2] Request::process begin ns: test.test msg id: -869595394 op: 2004 attempt: 0 2014-12-29T22:10:10.013+0000 D SHARDING [conn2] query: test.test { $query: {}, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 4 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] creating pcursor over QSpec { ns: "test.test", n2skip: 0, n2return: -1, options: 4, query: { $query: {}, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} } 2014-12-29T22:10:10.013+0000 D QUERY [conn2] [QLOG] Beginning planning... ============================= Options = NO_TABLE_SCAN Canonical query: ns=test.test limit=0 skip=0 Tree: $and Sort: {} Proj: {} ============================= 2014-12-29T22:10:10.013+0000 D QUERY [conn2] [QLOG] Index 0 is kp: { _id: 1 } 2014-12-29T22:10:10.013+0000 D QUERY [conn2] [QLOG] Rated tree: $and 2014-12-29T22:10:10.013+0000 D QUERY [conn2] [QLOG] Planner: outputted 0 indexed solutions. 
2014-12-29T22:10:10.013+0000 D NETWORK [conn2] initializing over 1 shards required by [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] initializing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] dbclient_rs say using secondary or tagged node selection in xxx, read pref is { pref: "primary pref", tags: [ {} ] } (primary : mongo_D1:27017, lastTagged : [not cached]) 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] dbclient_rs selecting primary node mongo_D1:27017 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] initialized query (lazily) on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] finishing over 1 shards 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] finishing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:10:10.013+0000 D NETWORK [conn2] finished on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "(done)", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 2014-12-29T22:10:10.013+0000 D SHARDING [conn2] Request::process end ns: test.test msg id: -869595394 op: 2004 attempt: 0 0ms 2014-12-29T22:10:10.014+0000 D SHARDING [conn3] Request::process begin ns: test.test msg id: 1064595279 op: 2004 attempt: 0 2014-12-29T22:10:10.014+0000 D SHARDING [conn3] query: test.test { $query: {}, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 4 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] creating pcursor over QSpec { ns: "test.test", n2skip: 0, n2return: -1, options: 4, query: { $query: {}, $readPreference: { mode: "primaryPreferred" } }, 
fields: {} } and CInfo { v_ns: "", filter: {} } 2014-12-29T22:10:10.014+0000 D QUERY [conn3] [QLOG] Beginning planning... ============================= Options = NO_TABLE_SCAN Canonical query: ns=test.test limit=0 skip=0 Tree: $and Sort: {} Proj: {} ============================= 2014-12-29T22:10:10.014+0000 D QUERY [conn3] [QLOG] Index 0 is kp: { _id: 1 } 2014-12-29T22:10:10.014+0000 D QUERY [conn3] [QLOG] Rated tree: $and 2014-12-29T22:10:10.014+0000 D QUERY [conn3] [QLOG] Planner: outputted 0 indexed solutions. 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] initializing over 1 shards required by [test.test @ 1|0||54a1d0e128d48a01b1441eea] 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] initializing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] dbclient_rs say using secondary or tagged node selection in xxx, read pref is { pref: "primary pref", tags: [ {} ] } (primary : mongo_D1:27017, lastTagged : [not cached]) 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] dbclient_rs selecting primary node mongo_D1:27017 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.014+0000 D NETWORK [conn3] initialized query (lazily) on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:10:10.015+0000 D NETWORK [conn3] finishing over 1 shards 2014-12-29T22:10:10.015+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.015+0000 D NETWORK [conn3] finishing on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "xxx/mongo_D1:27017,mongo_D2:27017", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 2014-12-29T22:10:10.015+0000 D NETWORK [conn3] finished on shard xxx:xxx/mongo_D1:27017,mongo_D2:27017, current connection state is { state: { conn: "(done)", vinfo: "test.test @ 1|0||54a1d0e128d48a01b1441eea", cursor: { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 2014-12-29T22:10:10.015+0000 D SHARDING [conn3] Request::process end ns: test.test msg id: 1064595279 op: 2004 attempt: 0 0ms 2014-12-29T22:10:10.016+0000 D SHARDING [conn2] Request::process begin ns: 
test.$cmd msg id: 1089883170 op: 2004 attempt: 0 2014-12-29T22:10:10.016+0000 D SHARDING [conn2] command: test.$cmd { insert: "test", ordered: true, documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ] } ntoreturn: -1 options: 0 2014-12-29T22:10:10.016+0000 D SHARDING [conn2] starting execution of write batch of size 1 for test.test 2014-12-29T22:10:10.016+0000 D NETWORK [conn2] ReplicaSetMonitor::get xxx 2014-12-29T22:10:10.016+0000 D SHARDING [conn2] sending write batch to mongo_D1:27017: { insert: "test", documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ], ordered: true, metadata: { shardName: "xxx", shardVersion: [ Timestamp 1000|0, ObjectId('54a1d0e128d48a01b1441eea') ], session: 0 } } 2014-12-29T22:10:10.016+0000 D NETWORK [conn2] polling for status of connection to 192.168.0.95:27017, no events 2014-12-29T22:10:10.016+0000 D SHARDING [conn2] write results received from mongo_D1:27017: { ok: 1, n: 0, lastOp: Timestamp 1419890918000|1, electionId: ObjectId('54a1d0e0b12b60a304edb60b'), writeErrors: [ { index: 0, code: 11000, errmsg: "insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.test.$_id_ dup key: { : ObjectId('54a1d0e66357ef0f03000e5a') }" } ] } 2014-12-29T22:10:10.016+0000 D SHARDING [conn2] finished execution of write batch with write errors for test.test 2014-12-29T22:10:10.016+0000 I NETWORK [conn2] SyncClusterConnection connecting to [mongo_CFG1:27017] 2014-12-29T22:10:10.017+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2014-12-29T22:10:10.233+0000 W NETWORK [LockPinger] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up. 
2014-12-29T22:10:10.233+0000 I NETWORK [LockPinger] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed
2014-12-29T22:10:10.233+0000 I NETWORK [LockPinger] SyncClusterConnection connecting to [mongo_CFG2:27017]
2014-12-29T22:10:10.234+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:10.234+0000 D NETWORK [LockPinger] connected to server mongo_CFG2:27017 (192.168.0.98)
2014-12-29T22:10:10.234+0000 I NETWORK [LockPinger] SyncClusterConnection connecting to [mongo_CFG3:27017]
2014-12-29T22:10:10.234+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:10.234+0000 D NETWORK [LockPinger] connected to server mongo_CFG3:27017 (192.168.0.99)
2014-12-29T22:10:10.235+0000 I NETWORK [LockPinger] unable to set SO_RCVTIMEO
2014-12-29T22:10:10.236+0000 I NETWORK [LockPinger] trying reconnect to mongo_CFG1:27017 (192.168.0.97) failed
2014-12-29T22:10:10.236+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:11.120+0000 I NETWORK [Balancer] Socket recv() errno:110 Connection timed out 192.168.0.97:27017
2014-12-29T22:10:11.120+0000 I NETWORK [Balancer] SocketException: remote: 192.168.0.97:27017 error: 9001 socket exception [RECV_ERROR] server [192.168.0.97:27017]
2014-12-29T22:10:11.120+0000 I NETWORK [Balancer] DBClientCursor::init call() failed
2014-12-29T22:10:11.120+0000 I NETWORK [Balancer] query on config.locks: { _id: "balancer" } failed to: mongo_CFG1:27017 (192.168.0.97) failed no data
2014-12-29T22:10:11.120+0000 D SHARDING [Balancer] about to acquire distributed lock 'balancer/mongo_S1:27017:1419890901:1804289383'
2014-12-29T22:10:11.120+0000 D SHARDING [Balancer] trying to acquire lock { _id: "balancer", state: 0, ts: ObjectId('54a1d10028d48a01b1441ef2') } with details { state: 1, who: "mongo_S1:27017:1419890901:1804289383:Balancer:1681692777", process: "mongo_S1:27017:1419890901:1804289383", when: new Date(1419891011120), why: "doing balance round", ts: ObjectId('54a1d14328d48a01b1441ef6') }
2014-12-29T22:10:11.121+0000 I NETWORK [Balancer] trying reconnect to mongo_CFG1:27017 (192.168.0.97) failed
2014-12-29T22:10:11.122+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:13.466+0000 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: xxx
2014-12-29T22:10:13.466+0000 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set xxx
2014-12-29T22:10:13.466+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.95:27017, no events
2014-12-29T22:10:13.467+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D1:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: true, secondary: false, hosts: [ "mongo_D1:27017", "mongo_D2:27017" ], primary: "mongo_D1:27017", me: "mongo_D1:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419891013467), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 }
2014-12-29T22:10:13.467+0000 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.0.96:27017, no events
2014-12-29T22:10:13.467+0000 D NETWORK [ReplicaSetMonitorWatcher] Updating host mongo_D2:27017 based on ismaster reply: { setName: "xxx", setVersion: 1, ismaster: false, secondary: true, hosts: [ "mongo_D2:27017", "mongo_D1:27017" ], primary: "mongo_D1:27017", me: "mongo_D2:27017", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1419891013467), maxWireVersion: 2, minWireVersion: 0, ok: 1.0 }
2014-12-29T22:10:15.017+0000 W NETWORK [conn2] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up.
2014-12-29T22:10:15.017+0000 I NETWORK [conn2] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed
2014-12-29T22:10:15.017+0000 I NETWORK [conn2] SyncClusterConnection connecting to [mongo_CFG2:27017]
2014-12-29T22:10:15.017+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:15.018+0000 D NETWORK [conn2] connected to server mongo_CFG2:27017 (192.168.0.98)
2014-12-29T22:10:15.018+0000 I NETWORK [conn2] SyncClusterConnection connecting to [mongo_CFG3:27017]
2014-12-29T22:10:15.018+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:15.018+0000 D NETWORK [conn2] connected to server mongo_CFG3:27017 (192.168.0.99)
2014-12-29T22:10:15.018+0000 I NETWORK [conn2] unable to set SO_RCVTIMEO
2014-12-29T22:10:15.018+0000 D SHARDING [conn2] Not all config servers config:mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 are reachable
2014-12-29T22:10:15.018+0000 I NETWORK [conn2] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool
2014-12-29T22:10:15.018+0000 D SHARDING [conn2] not performing auto-split because not all config servers are up
2014-12-29T22:10:15.018+0000 D SHARDING [conn2] Request::process end ns: test.$cmd msg id: 1089883170 op: 2004 attempt: 0 5002ms
2014-12-29T22:10:15.019+0000 D SHARDING [conn3] Request::process begin ns: test.$cmd msg id: -425475812 op: 2004 attempt: 0
2014-12-29T22:10:15.019+0000 D SHARDING [conn3] command: test.$cmd { insert: "test", ordered: true, documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ] } ntoreturn: -1 options: 0
2014-12-29T22:10:15.019+0000 D SHARDING [conn3] starting execution of write batch of size 1 for test.test
2014-12-29T22:10:15.020+0000 D NETWORK [conn3] ReplicaSetMonitor::get xxx
2014-12-29T22:10:15.020+0000 D SHARDING [conn3] sending write batch to mongo_D1:27017: { insert: "test", documents: [ { _id: ObjectId('54a1d0e66357ef0f03000e5a'), mongo_D1: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_D2: [ "alex/mongodb_1", "mongod --smallfiles --replSet xxx", 27017 ], mongo_CFG1: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG2: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_CFG3: [ "alex/mongodb_1", "mongod --smallfiles", 27017 ], mongo_S1: [ "alex/mongodb_1", "mongos --configdb mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017", 27017 ] } ], ordered: true, metadata: { shardName: "xxx", shardVersion: [ Timestamp 1000|0, ObjectId('54a1d0e128d48a01b1441eea') ], session: 0 } }
2014-12-29T22:10:15.020+0000 D NETWORK [conn3] polling for status of connection to 192.168.0.95:27017, no events
2014-12-29T22:10:15.020+0000 D SHARDING [conn3] write results received from mongo_D1:27017: { ok: 1, n: 0, lastOp: Timestamp 1419890918000|1, electionId: ObjectId('54a1d0e0b12b60a304edb60b'), writeErrors: [ { index: 0, code: 11000, errmsg: "insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.test.$_id_ dup key: { : ObjectId('54a1d0e66357ef0f03000e5a') }" } ] }
2014-12-29T22:10:15.020+0000 D SHARDING [conn3] finished execution of write batch with write errors for test.test
2014-12-29T22:10:15.020+0000 I NETWORK [conn3] SyncClusterConnection connecting to [mongo_CFG1:27017]
2014-12-29T22:10:15.020+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:15.236+0000 W NETWORK [LockPinger] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up.
2014-12-29T22:10:15.236+0000 I NETWORK [LockPinger] reconnect mongo_CFG1:27017 (192.168.0.97) failed failed couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed
2014-12-29T22:10:15.252+0000 I NETWORK [LockPinger] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool
2014-12-29T22:10:15.252+0000 W SHARDING [LockPinger] distributed lock pinger 'mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017/mongo_S1:27017:1419890901:1804289383' detected an exception while pinging. :: caused by :: SyncClusterConnection::update prepare failed: mongo_CFG1:27017 (192.168.0.97) failed:9001 socket exception [CONNECT_ERROR] server [mongo_CFG1:27017 (192.168.0.97) failed]
2014-12-29T22:10:16.122+0000 W NETWORK [Balancer] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up.
2014-12-29T22:10:16.122+0000 I NETWORK [Balancer] reconnect mongo_CFG1:27017 (192.168.0.97) failed failed couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed
2014-12-29T22:10:16.139+0000 I NETWORK [Balancer] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool
2014-12-29T22:10:16.139+0000 I SHARDING [Balancer] caught exception while doing balance: exception creating distributed lock balancer/mongo_S1:27017:1419890901:1804289383 :: caused by :: SyncClusterConnection::update prepare failed: mongo_CFG1:27017 (192.168.0.97) failed:9001 socket exception [CONNECT_ERROR] server [mongo_CFG1:27017 (192.168.0.97) failed]
2014-12-29T22:10:16.139+0000 D SHARDING [Balancer] *** End of balancing round
2014-12-29T22:10:16.139+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG1:27017]
2014-12-29T22:10:16.139+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:20.021+0000 W NETWORK [conn3] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up.
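The write results above are worth a closer look: the shard primary mongo_D1 answers ok: 1 but with a writeError code 11000 on the very _id the client just sent, which suggests the client retried an insert whose first attempt had already landed during the earlier 5-second config-server stall (conn2's request took 5002ms). A minimal sketch of tolerating that pattern on the client side, assuming a pymongo 3+ driver and a client-generated _id that stays fixed across retries; the hostnames and retry logic here are illustrative, not taken from the log:

# Minimal sketch (assumed pymongo >= 3, names illustrative): retry an insert through
# the mongos and treat a duplicate-key error on a fixed _id as "already written",
# which is what the writeErrors entry above reports for the retried document.
from bson import ObjectId
from pymongo import MongoClient
from pymongo.errors import AutoReconnect, DuplicateKeyError

coll = MongoClient("mongodb://mongo_S1:27017").test.test

def insert_once(doc, retries=3):
    """Insert doc at most once; doc must carry a client-generated _id."""
    for _ in range(retries):
        try:
            coll.insert_one(doc)
            return True
        except DuplicateKeyError:
            # A previous, slow attempt already reached the shard (E11000 above).
            return True
        except AutoReconnect:
            # mongos or its config servers were briefly unreachable; try again.
            continue
    return False

insert_once({"_id": ObjectId(), "payload": "example"})

Treating E11000 as success is only safe because the _id is fixed across retries, exactly as in the write batch shown above.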
2014-12-29T22:10:20.021+0000 I NETWORK [conn3] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed
2014-12-29T22:10:20.021+0000 I NETWORK [conn3] SyncClusterConnection connecting to [mongo_CFG2:27017]
2014-12-29T22:10:20.021+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:20.021+0000 D NETWORK [conn3] connected to server mongo_CFG2:27017 (192.168.0.98)
2014-12-29T22:10:20.022+0000 I NETWORK [conn3] SyncClusterConnection connecting to [mongo_CFG3:27017]
2014-12-29T22:10:20.022+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:20.022+0000 D NETWORK [conn3] connected to server mongo_CFG3:27017 (192.168.0.99)
2014-12-29T22:10:20.022+0000 I NETWORK [conn3] unable to set SO_RCVTIMEO
2014-12-29T22:10:20.022+0000 D SHARDING [conn3] Not all config servers config:mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 are reachable
2014-12-29T22:10:20.022+0000 I NETWORK [conn3] scoped connection to mongo_CFG1:27017,mongo_CFG2:27017,mongo_CFG3:27017 not being returned to the pool
2014-12-29T22:10:20.022+0000 D SHARDING [conn3] not performing auto-split because not all config servers are up
2014-12-29T22:10:20.023+0000 D SHARDING [conn3] Request::process end ns: test.$cmd msg id: -425475812 op: 2004 attempt: 0 5003ms
2014-12-29T22:10:21.139+0000 W NETWORK [Balancer] Failed to connect to 192.168.0.97:27017 after 5000 milliseconds, giving up.
2014-12-29T22:10:21.140+0000 I NETWORK [Balancer] SyncClusterConnection connect fail to: mongo_CFG1:27017 errmsg: couldn't connect to server mongo_CFG1:27017 (192.168.0.97), connection attempt failed
2014-12-29T22:10:21.140+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG2:27017]
2014-12-29T22:10:21.140+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:21.140+0000 D NETWORK [Balancer] connected to server mongo_CFG2:27017 (192.168.0.98)
2014-12-29T22:10:21.140+0000 I NETWORK [Balancer] SyncClusterConnection connecting to [mongo_CFG3:27017]
2014-12-29T22:10:21.141+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2014-12-29T22:10:21.141+0000 D NETWORK [Balancer] connected to server mongo_CFG3:27017 (192.168.0.99)
2014-12-29T22:10:21.141+0000 I NETWORK [Balancer] unable to set SO_RCVTIMEO
2014-12-29T22:10:21.141+0000 D SHARDING [Balancer] about to log balancer result: { server: "mongo_S1", time: new Date(1419891021141), what: "balancer.round", details: { executionTimeMillis: 61445, errorOccured: true, errmsg: "exception creating distributed lock balancer/mongo_S1:27017:1419890901:1804289383 :: caused by :: SyncClusterConnection::update prepare failed: mongo..." } }
2014-12-29T22:10:21.141+0000 D NETWORK [Balancer] creating new connection to:mongo_CFG1:27017
2014-12-29T22:10:21.141+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
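Throughout this window mongos keeps routing reads and writes to the shards (the replica set monitor still refreshes xxx normally), but anything that requires all three SCCC config servers fails: auto-splits are skipped, the distributed-lock pinger raises exceptions, and every balancer round ends with errorOccured: true. A quick way to confirm from the application side which config server is down is to ping each one directly; a minimal sketch assuming pymongo (the 5-second timeout mirrors the give-up seen in the log, and the hostnames come from the mongos options):

# Minimal sketch (assumed pymongo >= 3): ping each SCCC config server directly and
# inspect the balancer lock document that the Balancer thread queries above.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

CONFIG_SERVERS = ["mongo_CFG1:27017", "mongo_CFG2:27017", "mongo_CFG3:27017"]

for host in CONFIG_SERVERS:
    try:
        # Mirror the 5000 ms give-up mongos logs before declaring the host down.
        MongoClient(host, serverSelectionTimeoutMS=5000).admin.command("ping")
        print(host, "reachable")
    except PyMongoError as exc:
        print(host, "unreachable:", exc)

# The lock mongos could not acquire lives in config.locks (queried above as
# { _id: "balancer" }); it can be read through the mongos itself.
mongos = MongoClient("mongodb://mongo_S1:27017")
print(mongos.config.locks.find_one({"_id": "balancer"}))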