2016-07-19T17:07:02.519+0200 I NETWORK [conn1200571] end connection 172.16.96.81:38149 (202 connections now open)
2016-07-19T17:07:04.428+0200 I NETWORK [mongosMain] connection accepted from 172.16.96.139:52680 #1200724 (203 connections now open)
2016-07-19T17:07:04.433+0200 I SHARDING [conn1200724] couldn't find database [kay2] in config db
2016-07-19T17:07:04.730+0200 I SHARDING [conn1200724] put [kay2] on: reise_shard01:reise_shard01/mongo-035.ipx:27017,mongo-036.ipx:27017
2016-07-19T17:07:04.822+0200 I NETWORK [mongosMain] connection accepted from 172.16.96.139:52681 #1200725 (204 connections now open)
2016-07-19T17:07:04.836+0200 I NETWORK [mongosMain] connection accepted from 172.16.96.139:52682 #1200726 (205 connections now open)
2016-07-19T17:07:04.879+0200 I NETWORK [mongosMain] connection accepted from 172.16.96.139:52683 #1200727 (206 connections now open)
2016-07-19T17:07:08.760+0200 I SHARDING [Balancer] distributed lock 'balancer/mongo-router-01:27017:1453368611:1804289383' acquired, ts : 578e421c347813776cc94532
2016-07-19T17:07:08.832+0200 I SHARDING [Balancer] ns: hotel_prod_006.offer going to move { _id: "hotel_prod_006.offer-search.hotelId_1search.searchUid_0", ns: "hotel_prod_006.offer", min: { search.hotelId: 1, search.searchUid: 0 }, max: { search.hotelId: 394, search.searchUid: 1468075457886000 }, version: Timestamp 89000|1, versionEpoch: ObjectId('576a6764231d661c09112ad1'), lastmod: Timestamp 89000|1, lastmodEpoch: ObjectId('576a6764231d661c09112ad1'), shard: "reise_shard02" } from: reise_shard02 to: reise_shard01 tag []
2016-07-19T17:07:08.836+0200 I SHARDING [Balancer] moving chunk ns: hotel_prod_006.offer moving ( ns: hotel_prod_006.offer, shard: reise_shard02:reise_shard02/mongo-037.ipx:27017,mongo-038.ipx:27017, lastmod: 89|1||000000000000000000000000, min: { search.hotelId: 1, search.searchUid: 0 }, max: { search.hotelId: 394, search.searchUid: 1468075457886000 }) reise_shard02:reise_shard02/mongo-037.ipx:27017,mongo-038.ipx:27017 -> reise_shard01:reise_shard01/mongo-035.ipx:27017,mongo-036.ipx:27017
2016-07-19T17:07:09.430+0200 I NETWORK [mongosMain] connection accepted from 172.16.96.139:52684 #1200728 (207 connections now open)
2016-07-19T17:07:09.491+0200 I SHARDING [Balancer] moveChunk result: { cause: { ok: 0.0, errmsg: "migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a344b5288f9897220e23') } }, ok: 0.0, errmsg: "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a35f00d0725cbecc2f18') } }
2016-07-19T17:07:09.492+0200 I SHARDING [Balancer] balancer move failed: { cause: { ok: 0.0, errmsg: "migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a344b5288f9897220e23') } }, ok: 0.0, errmsg: "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a35f00d0725cbecc2f18') } } from: reise_shard02 to: reise_shard01 chunk: min: { search.hotelId: 1, search.searchUid: 0 } max: { search.hotelId: 394, search.searchUid: 1468075457886000 }
2016-07-19T17:07:09.831+0200 I SHARDING [Balancer] distributed lock 'balancer/mongo-router-01:27017:1453368611:1804289383' unlocked.
2016-07-19T17:07:18.057+0200 I NETWORK [mongosMain] connection accepted from 172.16.96.81:39806 #1200729 (208 connections now open)
2016-07-19T17:07:19.932+0200 I SHARDING [LockPinger] cluster mongo-router-01.hotel02.pro00.eu.idealo.com:27019,mongo-router-02.hotel02.pro00.eu.idealo.com:27019,mongo-router-03.hotel02.pro00.eu.idealo.com:27019 pinged successfully at Tue Jul 19 17:07:19 2016 by distributed lock pinger 'mongo-router-01.hotel02.pro00.eu.idealo.com:27019,mongo-router-02.hotel02.pro00.eu.idealo.com:27019,mongo-router-03.hotel02.pro00.eu.idealo.com:27019/mongo-router-01:27017:1453368611:1804289383', sleeping for 30000ms
2016-07-19T17:07:21.880+0200 I NETWORK [conn1200728] end connection 172.16.96.139:52684 (207 connections now open)
2016-07-19T17:07:21.880+0200 I NETWORK [conn1200727] end connection 172.16.96.139:52683 (207 connections now open)
2016-07-19T17:07:21.880+0200 I NETWORK [conn1200725] end connection 172.16.96.139:52681 (205 connections now open)
2016-07-19T17:07:22.272+0200 I NETWORK [conn1200726] end connection 172.16.96.139:52682 (204 connections now open)
2016-07-19T17:07:22.716+0200 I NETWORK [conn1200724] end connection 172.16.96.139:52680 (203 connections now open)
2016-07-19T17:07:26.549+0200 I NETWORK [mongosMain] connection accepted from 172.16.0.81:37894 #1200730 (204 connections now open)
2016-07-19T17:07:26.550+0200 I NETWORK [conn1200730] end connection 172.16.0.81:37894 (203 connections now open)
2016-07-19T17:07:28.804+0200 I NETWORK [mongosMain] connection accepted from 127.0.0.1:38559 #1200731 (204 connections now open)
2016-07-19T17:07:28.898+0200 I NETWORK [conn1200731] end connection 127.0.0.1:38559 (203 connections now open)
2016-07-19T17:07:30.029+0200 I SHARDING [Balancer] distributed lock 'balancer/mongo-router-01:27017:1453368611:1804289383' acquired, ts : 578e4231347813776cc94534
2016-07-19T17:07:30.095+0200 I SHARDING [Balancer] ns: hotel_prod_006.offer going to move { _id: "hotel_prod_006.offer-search.hotelId_1search.searchUid_0", ns: "hotel_prod_006.offer", min: { search.hotelId: 1, search.searchUid: 0 }, max: { search.hotelId: 394, search.searchUid: 1468075457886000 }, version: Timestamp 89000|1, versionEpoch: ObjectId('576a6764231d661c09112ad1'), lastmod: Timestamp 89000|1, lastmodEpoch: ObjectId('576a6764231d661c09112ad1'), shard: "reise_shard02" } from: reise_shard02 to: reise_shard01 tag []
2016-07-19T17:07:30.099+0200 I SHARDING [Balancer] moving chunk ns: hotel_prod_006.offer moving ( ns: hotel_prod_006.offer, shard: reise_shard02:reise_shard02/mongo-037.ipx:27017,mongo-038.ipx:27017, lastmod: 89|1||000000000000000000000000, min: { search.hotelId: 1, search.searchUid: 0 }, max: { search.hotelId: 394, search.searchUid: 1468075457886000 }) reise_shard02:reise_shard02/mongo-037.ipx:27017,mongo-038.ipx:27017 -> reise_shard01:reise_shard01/mongo-035.ipx:27017,mongo-036.ipx:27017
2016-07-19T17:07:30.736+0200 I NETWORK [conn1200686] end connection 172.16.96.24:49777 (202 connections now open)
2016-07-19T17:07:30.839+0200 I SHARDING [Balancer] moveChunk result: { cause: { ok: 0.0, errmsg: "migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a344b5288f9897220e23') } }, ok: 0.0, errmsg: "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a35f00d0725cbecc2f18') } }
2016-07-19T17:07:30.839+0200 I SHARDING [Balancer] balancer move failed: { cause: { ok: 0.0, errmsg: "migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a344b5288f9897220e23') } }, ok: 0.0, errmsg: "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a35f00d0725cbecc2f18') } } from: reise_shard02 to: reise_shard01 chunk: min: { search.hotelId: 1, search.searchUid: 0 } max: { search.hotelId: 394, search.searchUid: 1468075457886000 }
2016-07-19T17:07:31.416+0200 I SHARDING [Balancer] distributed lock 'balancer/mongo-router-01:27017:1453368611:1804289383' unlocked.
2016-07-19T17:07:40.447+0200 I NETWORK [conn1200629] end connection 172.16.96.108:51595 (201 connections now open)
2016-07-19T17:07:41.495+0200 I SHARDING [Balancer] distributed lock 'balancer/mongo-router-01:27017:1453368611:1804289383' acquired, ts : 578e423d347813776cc94536
2016-07-19T17:07:41.561+0200 I SHARDING [Balancer] ns: hotel_prod_006.offer going to move { _id: "hotel_prod_006.offer-search.hotelId_1search.searchUid_0", ns: "hotel_prod_006.offer", min: { search.hotelId: 1, search.searchUid: 0 }, max: { search.hotelId: 394, search.searchUid: 1468075457886000 }, version: Timestamp 89000|1, versionEpoch: ObjectId('576a6764231d661c09112ad1'), lastmod: Timestamp 89000|1, lastmodEpoch: ObjectId('576a6764231d661c09112ad1'), shard: "reise_shard02" } from: reise_shard02 to: reise_shard01 tag []
2016-07-19T17:07:41.564+0200 I SHARDING [Balancer] moving chunk ns: hotel_prod_006.offer moving ( ns: hotel_prod_006.offer, shard: reise_shard02:reise_shard02/mongo-037.ipx:27017,mongo-038.ipx:27017, lastmod: 89|1||000000000000000000000000, min: { search.hotelId: 1, search.searchUid: 0 }, max: { search.hotelId: 394, search.searchUid: 1468075457886000 }) reise_shard02:reise_shard02/mongo-037.ipx:27017,mongo-038.ipx:27017 -> reise_shard01:reise_shard01/mongo-035.ipx:27017,mongo-036.ipx:27017
2016-07-19T17:07:42.303+0200 I SHARDING [Balancer] moveChunk result: { cause: { note: "from execCommand", ok: 0.0, errmsg: "not master" }, ok: 0.0, errmsg: "moveChunk failed to engage TO-shard in the data transfer: not master", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a35f00d0725cbecc2f18') } }
2016-07-19T17:07:42.304+0200 I SHARDING [Balancer] balancer move failed: { cause: { note: "from execCommand", ok: 0.0, errmsg: "not master" }, ok: 0.0, errmsg: "moveChunk failed to engage TO-shard in the data transfer: not master", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a35f00d0725cbecc2f18') } } from: reise_shard02 to: reise_shard01 chunk: min: { search.hotelId: 1, search.searchUid: 0 } max: { search.hotelId: 394, search.searchUid: 1468075457886000 }
2016-07-19T17:07:42.584+0200 I SHARDING [Balancer] distributed lock 'balancer/mongo-router-01:27017:1453368611:1804289383' unlocked.
2016-07-19T17:07:42.618+0200 I NETWORK [conn1200692] end connection 172.16.96.77:56826 (200 connections now open)
2016-07-19T17:07:46.453+0200 I NETWORK [conn1200624] end connection 172.16.96.108:48369 (199 connections now open)
2016-07-19T17:07:49.962+0200 I SHARDING [LockPinger] cluster mongo-router-01.hotel02.pro00.eu.idealo.com:27019,mongo-router-02.hotel02.pro00.eu.idealo.com:27019,mongo-router-03.hotel02.pro00.eu.idealo.com:27019 pinged successfully at Tue Jul 19 17:07:49 2016 by distributed lock pinger 'mongo-router-01.hotel02.pro00.eu.idealo.com:27019,mongo-router-02.hotel02.pro00.eu.idealo.com:27019,mongo-router-03.hotel02.pro00.eu.idealo.com:27019/mongo-router-01:27017:1453368611:1804289383', sleeping for 30000ms
2016-07-19T17:07:52.667+0200 I SHARDING [Balancer] distributed lock 'balancer/mongo-router-01:27017:1453368611:1804289383' acquired, ts : 578e4248347813776cc94538
2016-07-19T17:07:52.733+0200 I SHARDING [Balancer] ns: hotel_prod_006.offer going to move { _id: "hotel_prod_006.offer-search.hotelId_1search.searchUid_0", ns: "hotel_prod_006.offer", min: { search.hotelId: 1, search.searchUid: 0 }, max: { search.hotelId: 394, search.searchUid: 1468075457886000 }, version: Timestamp 89000|1, versionEpoch: ObjectId('576a6764231d661c09112ad1'), lastmod: Timestamp 89000|1, lastmodEpoch: ObjectId('576a6764231d661c09112ad1'), shard: "reise_shard02" } from: reise_shard02 to: reise_shard01 tag []
2016-07-19T17:07:52.737+0200 I SHARDING [Balancer] moving chunk ns: hotel_prod_006.offer moving ( ns: hotel_prod_006.offer, shard: reise_shard02:reise_shard02/mongo-037.ipx:27017,mongo-038.ipx:27017, lastmod: 89|1||000000000000000000000000, min: { search.hotelId: 1, search.searchUid: 0 }, max: { search.hotelId: 394, search.searchUid: 1468075457886000 }) reise_shard02:reise_shard02/mongo-037.ipx:27017,mongo-038.ipx:27017 -> reise_shard01:reise_shard01/mongo-035.ipx:27017,mongo-036.ipx:27017
2016-07-19T17:07:53.480+0200 I SHARDING [Balancer] moveChunk result: { cause: { ok: 0.0, errmsg: "migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a344b5288f9897220e23') } }, ok: 0.0, errmsg: "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a35f00d0725cbecc2f18') } }
2016-07-19T17:07:53.481+0200 I SHARDING [Balancer] balancer move failed: { cause: { ok: 0.0, errmsg: "migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a344b5288f9897220e23') } }, ok: 0.0, errmsg: "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('56a0a35f00d0725cbecc2f18') } } from: reise_shard02 to: reise_shard01 chunk: min: { search.hotelId: 1, search.searchUid: 0 } max: { search.hotelId: 394, search.searchUid: 1468075457886000 }
2016-07-19T17:07:53.743+0200 I SHARDING [Balancer] distributed lock 'balancer/mongo-router-01:27017:1453368611:1804289383' unlocked.
2016-07-19T17:07:54.370+0200 I COMMAND [conn1200719] DROP DATABASE: kay2
2016-07-19T17:07:54.370+0200 I SHARDING [conn1200719] erased database kay2 from local registry
2016-07-19T17:07:54.372+0200 I SHARDING [conn1200719] DBConfig::dropDatabase: kay2
2016-07-19T17:07:54.372+0200 I SHARDING [conn1200719] about to log metadata event: { _id: "mongo-router-01-2016-07-19T15:07:54-578e424a347813776cc9453a", server: "mongo-router-01", clientAddr: "N/A", time: new Date(1468940874372), what: "dropDatabase.start", ns: "kay2", details: {} }
2016-07-19T17:07:54.835+0200 I SHARDING [conn1200719] DBConfig::dropDatabase: kay2 dropped sharded collections: 0
2016-07-19T17:07:54.838+0200 I NETWORK [conn1200719] scoped connection to reise_shard01/mongo-035.ipx:27017,mongo-036.ipx:27017 not being returned to the pool
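The log above shows the balancer retrying the same chunk of hotel_prod_006.offer every 10-odd seconds and failing with "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress" (and once "not master"), which suggests the recipient shard reise_shard01 still believes an earlier migration is active. The following is a minimal diagnostic sketch in the mongo shell, not taken from the original report; it assumes shell access to the mongos (mongo-router-01:27017) and to the reise_shard01 primary, and the filters are illustrative only.

// Run against the mongos: is balancing enabled, and is a round in flight?
sh.getBalancerState();      // true/false: balancing enabled
sh.isBalancerRunning();     // true while a balancing round holds the lock

// Inspect the distributed balancer lock seen in the log.
db.getSiblingDB("config").locks.find({ _id: "balancer" }).pretty();

// Recent migration outcomes for the namespace that keeps failing.
db.getSiblingDB("config").changelog.find(
    { what: /moveChunk/, ns: "hotel_prod_006.offer" }
).sort({ time: -1 }).limit(5).pretty();

// Run against the reise_shard01 primary: look for a lingering migration
// thread (the "migrate already in progress" state). The msg pattern is an
// assumption about how such operations typically report progress.
db.currentOp(true).inprog.filter(function (op) {
    return op.msg && /migrate|step \d+ of \d+/i.test(op.msg);
});

If an orphaned migration does show up on the recipient, one common approach is to disable the balancer with sh.stopBalancer(), wait for or clear the stuck operation (for example by stepping down or restarting the affected primary during a maintenance window), and then re-enable it with sh.startBalancer(); whether that is appropriate here depends on the cluster's state at the time.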